query_id (string, 32 chars) | query (string, 5–5.38k chars) | positive_passages (list, 1–23 items) | negative_passages (list, 4–100 items) | subset (string, 7 classes) |
---|---|---|---|---|
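The rows below follow this five-column schema. As a minimal sketch only, assuming the table has been exported as JSON Lines (one record per line), the records could be iterated like this; the path `rows.jsonl` and the helper names are placeholders, not part of the dataset:

```python
import json

# Expected fields, taken from the table header above.
EXPECTED_FIELDS = {"query_id", "query", "positive_passages", "negative_passages", "subset"}

def iter_records(path="rows.jsonl"):
    """Yield one record dict per line, checking that the schema fields are present."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            missing = EXPECTED_FIELDS - record.keys()
            if missing:
                raise ValueError(f"record {record.get('query_id')} is missing {missing}")
            yield record

if __name__ == "__main__":
    for rec in iter_records():
        print(rec["query_id"], rec["subset"],
              len(rec["positive_passages"]), len(rec["negative_passages"]))
```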
39d3a7ae2678036aae7f582eb9f0db1a
|
Entropy-based Selection of Graph Cuboids
|
[
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
}
] |
[
{
"docid": "e3ae049bd1cecbde679acdefc4ad0758",
"text": "Beneficial plant–microbe interactions in the rhizosphere are primary determinants of plant health and soil fertility. Arbuscular mycorrhizas are the most important microbial symbioses for the majority of plants and, under conditions of P-limitation, influence plant community development, nutrient uptake, water relations and above-ground productivity. They also act as bioprotectants against pathogens and toxic stresses. This review discusses the mechanism by which these benefits are conferred through abiotic and biotic interactions in the rhizosphere. Attention is paid to the conservation of biodiversity in arbuscular mycorrhizal fungi (AMF). Examples are provided in which the ecology of AMF has been taken into account and has had an impact in landscape regeneration, horticulture, alleviation of desertification and in the bioremediation of contaminated soils. It is vital that soil scientists and agriculturalists pay due attention to the management of AMF in any schemes to increase, restore or maintain soil fertility.",
"title": ""
},
{
"docid": "07447829f6294660359219c2310968b6",
"text": "Caudal duplication (dipygus) is an uncommon pathologic of conjoined twinning. The conjoined malformation is classified according to the nature and site of the union. We report the presence of this malformation in a female crossbreed puppy. The puppy was delivered by caesarean section following a prolonged period of dystocia. External findings showed a single head (monocephalus) and a normal cranium with no fissure in the medial line detected. The thorax displayed a caudal duplication arising from the lumbosacral region (rachipagus). The puppy had three upper limbs, a right and left, and a third limb in the dorsal region where the bifurcation began. The subsequent caudal duplication appeared symmetrical. Necropsy revealed internal abnormalities consisting of a complete duplication of the urogenital system and a duplication of the large intestines arising from a bifurcation of the caudal ileum . Considering the morphophysiological description the malformation described would be classified as the first case in the dog of a monocephalusrachipagustribrachius tetrapus.",
"title": ""
},
{
"docid": "3608939d057889c2731b12194ef28ea6",
"text": "Permanent magnets with rare earth materials are widely used in interior permanent magnet synchronous motors (IPMSMs) in Hybrid Electric Vehicles (HEVs). The recent price rise of rare earth materials has become a serious concern. A Switched Reluctance Motor (SRM) is one of the candidates for HEV rare-earth-free-motors. An SRM has been developed with dimensions, maximum torque, operating area, and maximum efficiency that all compete with the IPMSM. The efficiency map of the SRM is different from that of the IPMSM; thus, direct comparison has been rather difficult. In this paper, a comparison of energy consumption between the SRM and the IPMSM using four standard driving schedules is carried out. In HWFET and NEDC driving schedules, the SRM is found to have better efficiency because its efficiency is high at the high-rotational-speed region.",
"title": ""
},
{
"docid": "1e6167b15cc904131582beaaf9eb6051",
"text": "Using fully homomorphic encryption scheme, we construct fully homomorphic encryption scheme FHE4GT that can homomorphically compute an encryption of the greater-than bit that indicates x > x' or not, given two ciphertexts c and c' of x and x', respectively, without knowing the secret key. Then, we construct homomorphic classifier homClassify that can homomorphically classify a given encrypted data without decrypting it, using machine learned parameters.",
"title": ""
},
{
"docid": "d4ea09e7c942174c0301441a5c53b4ef",
"text": "As the cloud computing is a new style of computing over internet. It has many advantages along with some crucial issues to be resolved in order to improve reliability of cloud environment. These issues are related with the load management, fault tolerance and different security issues in cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that all the processor in the system or every node in the network does approximately the equal amount of work at any instant of time. Many methods to resolve this problem has been came into existence like Particle Swarm Optimization, hash method, genetic algorithms and several scheduling based algorithms are there. In this paper we are proposing a method based on Ant Colony optimization to resolve the problem of load balancing in cloud environment.",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "405e5d6050adec3cc6e60a4e64b1e0a5",
"text": "The ARCS Motivation Theory was proposed to guide instructional designers and teachers who develop their own instruction to integrate motivational design strategies into the instruction. There is a lack of literature supporting the idea that instruction for blended courses if designed based on the ARCS Motivation Theory provides different experiences for learners in terms of motivation than instruction developed following the standard instructional design procedure for blended courses. This study was conducted to compare the students‘ motivational evaluation of blended course modules developed based on the ARCS Motivation Theory and students‘ motivational evaluation of blended course modules developed following the standard instructional design procedure. Randomly assigned fifty junior undergraduate students studying at the department of Turkish Language and Literature participated in the study. Motivation Measure for the Blended Course Instruction (MMBCI) instrument was used to collect data for the study after the Confirmatory Factor Analysis (CFA). Results of the study indicated that designing instruction in blended courses based on the ARCS Motivation Theory provides more motivational benefits for students and consequently contributes student learning.",
"title": ""
},
{
"docid": "c1b1fe329296d4996f741b9e2ae558ac",
"text": "In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.",
"title": ""
},
{
"docid": "0ef3d7b26feba199df7d466d14740a57",
"text": "A parsing algorithm visualizer is a tool that visualizes the construction of a parser for a given context-free grammar and then illustrates the use of that parser to parse a given string. Parsing algorithm visualizers are used to teach the course on compiler construction which in invariably included in all undergraduate computer science curricula. This paper presents a new parsing algorithm visualizer that can visualize six parsing algorithms, viz. predictive parsing, simple LR parsing, canonical LR parsing, look-ahead LR parsing, Earley parsing and CYK parsing. The tool logically explains the process of parsing showing the calculations involved in each step. The output of the tool has been structured to maximize the learning outcomes and contains important constructs like FIRST and FOLLOW sets, item sets, parsing table, parse tree and leftmost or rightmost derivation depending on the algorithm being visualized. The tool has been used to teach the course on compiler construction at both undergraduate and graduate levels. An overall positive feedback was received from the students with 89% of them saying that the tool helped them in understanding the parsing algorithms. The tool is capable of visualizing multiple parsing algorithms and 88% students used it to compare the algorithms.",
"title": ""
},
{
"docid": "c67ffe3dfa6f0fe0449f13f1feb20300",
"text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.",
"title": ""
},
{
"docid": "abc48ae19e2ea1e1bb296ff0ccd492a2",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
{
"docid": "a6b29716a299415fd88289032acf7d3d",
"text": "As Internet grows quickly, pornography, which is often printed into a small quantity of publication in the past, becomes one of the highly distributed information over Internet. However, pornography may be harmful to children, and may affect the efficiency of workers. In this paper, we design an easy scheme for detecting pornography. We exploit primitive information from pornography and use this knowledge for determining whether a given photo belongs to pornography or not. In the beginning, we extract skin region from photos, and find out the correlation in skin region and non-skin region. Then, we use these correlations as the input of support vector machine (SVM), an excellent tool for classification with learning abilities. After a period of training SVM model, we achieved about 75% of accuracy, 35% of false alarm rate, and only 14% of mis-detection rate. Moreover, we also provide a simple tool based on our scheme.",
"title": ""
},
{
"docid": "5bf172cfc7d7de0c82707889cf722ab2",
"text": "The concept of a decentralized ledger usually implies that each node of a blockchain network stores the entire blockchain. However, in the case of popular blockchains, which each weigh several hundreds of GB, the large amount of data to be stored can incite new or low-capacity nodes to run lightweight clients. Such nodes do not participate to the global storage effort and can result in a centralization of the blockchain by very few nodes, which is contrary to the basic concepts of a blockchain. To avoid this problem, we propose new low storage nodes that store a reduced amount of data generated from the blockchain by using erasure codes. The properties of this technique ensure that any block of the chain can be easily rebuilt from a small number of such nodes. This system should encourage low storage nodes to contribute to the storage of the blockchain and to maintain decentralization despite of a globally increasing size of the blockchain. This system paves the way to new types of blockchains which would only be managed by low capacity nodes.",
"title": ""
},
{
"docid": "d35736158d3f38503f0f2090c4e47811",
"text": "This study examines the role of the decision environment in how well business intelligence (BI) capabilities are leveraged to achieve BI success. We examine the decision environment in terms of the types of decisions made and the information processing needs of the organization. Our findings suggest that technological capabilities such as data quality, user access and the integration of BI with other systems are necessary for BI success, regardless of the decision environment. However, the decision environment does influence the relationship between BI success and capabilities, such as the extent to which BI supports flexibility and risk in decision making. 2013 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +32 16248854. E-mail addresses: oyku.isik@vlerick.com (Ö. Işık), mary.jones@unt.edu (M.C. Jones), anna.sidorova@unt.edu (A. Sidorova).",
"title": ""
},
{
"docid": "22c3eb9aa0127e687f6ebb6994fc8d1d",
"text": "In this paper, the novel inverse synthetic aperture secondary radar wireless positioning technique is introduced. The proposed concept allows for a precise spatial localization of a backscatter transponder even in dense multipath environments. A novel secondary radar signal evaluation concept compensates for the unknown modulation phase of the returned signal and thus leads to radar signals comparable to common primary radar. With use of this concept, inverse synthetic aperture radar algorithms can be applied to the signals of backscatter transponder systems. In simulations and first experiments, we used a broadband holographic reconstruction principle to realize the inverse synthetic aperture approach. The movement of the transponder along a short arbitrary aperture path is determined with assisting relative sensors (dead reckoning or inertia sensors). A set of signals measured along the aperture is adaptively focused to the transponder position. By this focusing technique, multipath reflections can be suppressed impressively and a precise indoor positioning becomes feasible. With our technique, completely new and powerful options for integrated navigation and sensor fusion in RF identification systems and wireless local positioning systems are now possible.",
"title": ""
},
{
"docid": "20ca4823a5bb5388404e509cb558fae9",
"text": "Developing learning experiences that facilitate self-actualization and creativity is among the most important goals of our society in preparation for the future. To facilitate deep understanding of a new concept, to facilitate learning, learners must have the opportunity to develop multiple and flexible perspectives. The process of becoming an expert involves failure, as well as the ability to understand failure and the motivation to move onward. Meta-cognitive awareness and personal strategies can play a role in developing an individual’s ability to persevere through failure, and combat other diluting influences. Awareness and reflective technologies can be instrumental in developing a meta-cognitive ability to make conscious and unconscious decisions about engagement that will ultimately enhance learning, expertise, creativity, and self-actualization. This paper will review diverse perspectives from psychology, engineering, education, and computer science to present opportunities to enhance creativity, motivation, and self-actualization in learning systems. r 2005 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "9bcf4fcb795ab4cfe4e9d2a447179feb",
"text": "In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the “inputs” into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in detect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified.",
"title": ""
},
{
"docid": "57167d5bf02e9c76057daa83d3f803c5",
"text": "When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.",
"title": ""
},
{
"docid": "229a541fa4b8e9157c8cc057ae028676",
"text": "The proposed system introduces a new genetic algorithm for prediction of financial performance with input data sets from a financial domain. The goal is to produce a GA-based methodology for prediction of stock market performance along with an associative classifier from numerical data. This work restricts the numerical data to stock trading data. Stock trading data contains the quotes of stock market. From this information, many technical indicators can be extracted, and by investigating the relations between these indicators trading signals can discovered. Genetic algorithm is being used to generate all the optimized relations among the technical indicator and its value. Along with genetic algorithm association rule mining algorithm is used for generation of association rules among the various Technical Indicators. Associative rules are generated whose left side contains a set of trading signals, expressed by relations among the technical indicators, and whose right side indicates whether there is a positive ,negative or no change. The rules are being further given to the classification process which will be able to classify the new data making use of the previously generated rules. The proposed idea in the paper is to offer an efficient genetic algorithm in combination with the association rule mining algorithm which predicts stock market performance. Keywords— Genetic Algorithm, Associative Rule Mining, Technical Indicators, Associative rules, Stock Market, Numerical Data, Rules INTRODUCTION Over the last decades, there has been much research interests directed at understanding and predicting future. Among them, to forecast price movements in stock markets is a major challenge confronting investors, speculator and businesses. How to make a right decision in stock trading extracts many attentions from many financial and technical fields. Many technologies such as evolutionary optimization methods have been studied to help people find better way to earn more profit from the stock market. And the data mining method shows its power to improve the accuracy of stock movement prediction, with which more profit can be obtained with less risk. Applications of data mining techniques for stock investment include clustering, decision tree etc. Moreover, researches on stock market discover trading signals and timings from financial data. Because of the numerical attributes used, data mining techniques, such as decision tree, have weaker capabilities to handle this kind of numerical data and there are infinitely many possible ways to enumerate relations among data. Stock prices depend on various factors, the important ones being the market sentiment, performance of the industry, earning results and projected earnings, takeover or merger, introduction of a new product or introduction of an existing product into new markets, share buy-back, announcements of dividends/bonuses, addition or removal from the index and such other factors leading to a positive or negative impact on the share price and the associated volumes. Apart from the basic technical and fundamental analysis techniques used in stock market analysis and prediction, soft computing methods based on Association Rule Mining, fuzzy logic, neural networks, genetic algorithms etc. are increasingly finding their place in understanding and predicting the financial markets. Genetic algorithm has a great capability to discover good solutions rapidly for difficult high dimensional problems. 
The genetic algorithm has good capability to deal with numerical data and relations between numerical data. Genetic algorithms have emerged as a powerful general purpose search and optimization technique and have found applications in widespread areas. Associative classification, one of the most important tasks in data mining and knowledge discovery, builds a classification system based on associative classification rules. Association rules are learned and extracted from the available training dataset and the most suitable rules are selected to build an associative classification model. Association rule discovery has been used with great success in",
"title": ""
}
] |
scidocsrr
|
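Each passage in the row above is an object with `docid`, `text`, and `title` fields. As a hedged sketch, one such record can be turned into (query, positive, negative) training triples, the usual input for pairwise training of a retrieval model; the function name and the `max_triples` cap are illustrative, not part of the dataset:

```python
from itertools import product

def build_triples(record, max_triples=None):
    """Pair every positive passage with every negative passage for one query."""
    query = record["query"]
    positives = [p["text"] for p in record["positive_passages"]]
    negatives = [n["text"] for n in record["negative_passages"]]
    triples = [(query, pos, neg) for pos, neg in product(positives, negatives)]
    return triples[:max_triples] if max_triples is not None else triples

# Example: a row with 1 positive and 20 negatives yields 20 triples for that query.
```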
0e77526a85deeb78ba405294d8edb762
|
The Core Components of RTI : A Closer Look at Evidence-based Core Curriculum , Assessment and Progress Monitoring , and Data-based Decision Making
|
[
{
"docid": "a87b76cbd497826eadb9b0c28c8253dd",
"text": "On December 3, 2004, President Bush signed into law the Individuals with Disabilities Education Improvement Act (IDEA, 2004). The revised law is different from the previous version in at least one important respect. Whereas practitioners were previously encouraged to use IQ–achievement discrepancy to identify children with learning disabilities (LD), they now may use “Response to Intervention,” or RTI, a new, alternative method. It is also a means of providing early intervention to all children at risk for school failure. IDEA 2004 permits districts to use as much as 15% of their special education monies to fund early intervention activities. All this has implications for the number and type of children identified, the kinds of educational services provided, and who delivers them. RTI may be especially important to Reading Research Quarterly readers because roughly 80% of those with an LD label have been described as reading disabled (Lyon, 1995). With RTI, there may be a larger role for reading specialists, which in turn might affect preand inservice professional development activities conducted by universities and school districts. Yet much still needs to be understood to ensure that RTI implementation will promote effective early intervention and represents a valid means of LD identification. In this article, we explain important features of RTI, why it has been promoted as a substitute for IQ–achievement discrepancy, and what remains to be understood before it may be seen as a valid means of LD identification. What is RTI?",
"title": ""
}
] |
[
{
"docid": "6ce8648c194a73fccf9352a74faa405c",
"text": "Recently, social media such as Facebook has been more popular. Receiving information from Facebook and generating or spreading information on Facebook every day has become a general lifestyle. This new information-exchanging platform contains a lot of meaningful messages including users' emotions and preferences. Using messages on Facebook or in general social media to predict the election result and political affiliation has been a trend. In Taiwan, for example, almost every politician tries to have public opinion polls by using social media; almost every politician has his or her own fan page on Facebook, and so do the parties. We make an effort to predict to what party, DPP or KMT, two major parties in Taiwan, a post would be related or affiliated. We design features and models for the prediction, and we evaluate as well as compare them with the data collected from several political fan pages on Facebook. The results show that we can obtain accuracy higher than 90% when the text and interaction features are used with a nearest neighbor classifier.",
"title": ""
},
{
"docid": "c8d33f21915a6f1403f046ffa17b6e2e",
"text": "Synthetic aperture radar (SAR) image segmentation is a difficult problem due to the presence of strong multiplicative noise. To attain multi-region segmentation for SAR images, this paper presents a parametric segmentation method based on the multi-texture model with level sets. Segmentation is achieved by solving level set functions obtained from minimizing the proposed energy functional. To fully utilize image information, edge feature and region information are both included in the energy functional. For the need of level set evolution, the ratio of exponentially weighted averages operator is modified to obtain edge feature. Region information is obtained by the improved edgeworth series expansion, which can adaptively model a SAR image distribution with respect to various kinds of regions. The performance of the proposed method is verified by three high resolution SAR images. The experimental results demonstrate that SAR images can be segmented into multiple regions accurately without any speckle pre-processing steps by the proposed method.",
"title": ""
},
{
"docid": "6d65156ca8fed2aa61dda6f5c98ecdce",
"text": "Emerging digital environments and infrastructures, such as distributed security services and distributed computing services, have generated new options of communication, information sharing, and resource utilization in past years. However, when distributed services are used, the question arises of to what extent we can trust service providers to not violate security requirements, whether in isolation or jointly. Answering this question is crucial for designing trustworthy distributed systems and selecting trustworthy service providers. This paper presents a novel trust measurement method for distributed systems, and makes use of propositional logic and probability theory. The results of the qualitative part include the specification of a formal trust language and the representation of its terms by means of propositional logic formulas. Based on these formulas, the quantitative part returns trust metrics for the determination of trustworthiness with which given distributed systems are assumed to fulfill a particular security requirement.",
"title": ""
},
{
"docid": "4a572df21f3a8ebe3437204471a1fd10",
"text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.",
"title": ""
},
{
"docid": "b1c1f9cdce2454508fc4a5c060dc1c57",
"text": "We present a reduced-order approach for robust, dynamic, and efficient bipedal locomotion control, culminating in 3D balancing and walking with ATRIAS, a heavily underactuated legged robot. These results are a development toward solving a number of enduring challenges in bipedal locomotion: achieving robust 3D gaits at various speeds and transitioning between them, all while minimally draining on-board energy supplies. Our reduced-order control methodology works by extracting and exploiting general dynamical behaviors from the spring-mass model of bipedal walking. When implemented on a robot with spring-mass passive dynamics, e.g. ATRIAS, this controller is sufficiently robust to balance while subjected to pushes, kicks, and successive dodgeball strikes. The controller further allowed smooth transitions between stepping in place and walking at a variety of speeds (up to 1.2 m/s). The resulting gait dynamics also match qualitatively to the reduced-order model, and additionally, measurements of human walking. We argue that the presented locomotion performance is compelling evidence of the effectiveness of the presented approach; both the control concepts and the practice of building robots with passive dynamics to accommodate them. INTRODUCTION We present 3D bipedal walking control for the dynamic bipedal robot, ATRIAS (Fig. 1), by building controllers on a foundation of insights from a reduced-order “spring-mass” math model. This work is aimed at tackling an enduring set of challenges in bipedal robotics: fast 3D locomotion that is efficient and robust to disturbances. Further, we want the ability to transition between gaits of different speeds, including slowing to and starting from zero velocity. This set of demands is challenging from a generalized formal control approach because of various inconvenient mathematical properties; legged systems are typically cast as nonlinear, hybrid-dynamical, and nonholonomic systems, which at the same time, because of the very nature of walking, require highly robust control algorithms. Bipedal robots are also increasingly becoming underactuated, i.e. a system with fewer actuators than degrees of freedom [1]. Underactuation is problematic for nonlinear control methods; as degrees of underactuation increase, handy techniques like feedback-linearization decline in the scope of their utility. 1 Copyright c © 2015 by ASME FIGURE 1: ATRIAS, A HUMAN-SCALE BIPEDAL “SPRINGMASS” ROBOT DESIGNED TO WALK AND RUN IN THREE DIMENSIONS. Whenever a robot does not have an actuated foot planted on rigid ground, it is effectively underactuated. As a result, the faster legged robots move and the rougher the terrain they encounter, it becomes increasingly impractical to avoid these underactuated domains. Further, there are compelling reasons, both mechanical and dynamical, for removing actuators from certain degrees of freedom (see more in the robot design section). With these facts in mind, our robotic platform is built to embody an underactuated and compliant “spring-mass” model (Fig. 2A), and our control reckons with the severe underactuation that results. ATRIAS has twelve degrees of freedom when walking, but just six actuators. However, by numerically analyzing the spring-mass model, we identify important targets and structures of control that can be regulated on the full-order robot which approximates it. We organize the remainder of this paper as follows. We begin by surveying existing control methods for 3D, underactuated, and spring-mass locomotion. 
2) The design philosophy of our spring-mass robot, ATRIAS, and its implementation are briefly described. 3) We then build a controller incrementally from a 1D idealized model, to a 3D model, to the full 12-degree-of-freedom robot. 4) We show that this controller can regulate speeds ranging from 0 m/s to 1.2 m/s and transition between them. 5) With a set of perturbation experiments, we demonstrate the robustness of the controller and 6) argue in our conclusions for the thoughtful cooperation between the tasks of robot design and control. FIGURE 2: THE DESIGN PHILOSOPHY OF ATRIAS, WHICH MAXIMALLY EMBODIES THE “SPRING-MASS” MODEL OF WALKING AND RUNNING. A) THE SPRING MASS MODEL WITH A POINT MASS BODY AND MASSLESS LEG SPRING. B) ATRIAS WITH A VIRTUAL LEG SPRING OVERLAID.",
"title": ""
},
{
"docid": "4c1060bf3e7d01f817e6ce84d1d6fac0",
"text": "1668 The smaller the volume (or share) of imports from the trading partner, the larger the impact of a preferential trade agreement on home country welfare—because the smaller the imports, the smaller the loss in tariff revenue. And the home country is better off as a small member of a large bloc than as a large member of a small bloc. Summary findings There has been a resurgence of preferential trade agreements (PTAs) partly because of the deeper European integration known as EC-92, which led to a fear of a Fortress Europe; and partly because of the U.S. decision to form a PTA with Canada. As a result, there has been a domino effect: a proliferation of PTAs, which has led to renewed debate about how PTAs affect both welfare and the multilateral system. Schiff examines two issues: the welfare impact of preferential trade agreements (PTAs) and the effect of structural and policy changes on PTAs. He asks how the PTA's effect on home-country welfare is affected by higher demand for imports; the efficiency of production of the partner or rest of the world (ROW); the share imported from the partner (ROW); and the initial protection on imports from the partner (ROW). Among his findings: • An individual country benefits more from a PTA if it imports less from its partner countries (with imports measured either in volume or as a share of total imports). This result has important implications for choice of partners. • A small home country loses from forming a free trade agreement (FTA) with a small partner country but gains from forming one with the rest of the world. In other words, the home country is better off as a small member of a large bloc than as a large member of a small bloc. This result need not hold if smuggling is a factor. • Home country welfare after formation of a FTA is higher when imports from the partner country are smaller, whether the partner country is large or small. Welfare worsens as imports from the partner country increase. • In general, a PTA is more beneficial (or less harmful) for a country with lower import demand. A PTA is also more beneficial for a country with a more efficient import-substituting sector, as this will result in a lower demand for imports. • A small country may gain from forming a PTA when smuggling …",
"title": ""
},
{
"docid": "55b967cd6d28082ba0fa27605f161060",
"text": "Background. A scheme for format-preserving encryption (FPE) is supposed to do that which a conventional (possibly tweakable) blockcipher does—encipher messages within some message space X—except that message space, instead of being something like X = {0, 1}128, is more gen eral [1, 3]. For example, the message space might be the set X = {0, 1, . . . , 9}16, in which case each 16-digit plaintext X ∈ X gets enciphered into a 16-digit ciphertext Y ∈ X . In a stringbased FPE scheme—the only type of FPE that we consider here—the message space is of the form n X = {0, 1, . . . , radix − 1} for some message length n and alphabet size radix.",
"title": ""
},
{
"docid": "7b0d52753e359a6dff3847ff57c321ac",
"text": "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenge task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MTLSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.",
"title": ""
},
{
"docid": "100c152685655ad6865f740639dd7d57",
"text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"title": ""
},
{
"docid": "159cd44503cb9def6276cb2b9d33c40e",
"text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.",
"title": ""
},
{
"docid": "6a94bd02742b43102c25f874ba309bc9",
"text": "Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, speci cation of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via uniformization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are speci ed at the SAN level, and solved in a single model. Furthermore, we propose a new technique for discarding paths in the uniformized process whose contribution to the reward variable is minimal, which greatly reduces the time and space required for a solution. A bound is calculated on the error introduced by this discarding, and its e ectiveness is illustrated through the study of the performability and availability of a degradable multi-processor system.",
"title": ""
},
{
"docid": "3fd46b96983b317973a62c8d3e458bdf",
"text": "There are lots of big companies that would love to switch from their big legacy systems to avoid compromises in functionality, make them more agile, lower IT costs, and help them to become faster to market. This article describes how they can make the move.",
"title": ""
},
{
"docid": "3613dd18a4c930a28ed520192f7ac23f",
"text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.",
"title": ""
},
{
"docid": "e95a8c698f4763e8ef19c3bd975034bb",
"text": "The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-groups similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N=1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. Data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and comparing across societies.",
"title": ""
},
{
"docid": "45c3c54043337e91a44e71945f4d63dd",
"text": "Neutrophils are being increasingly recognized as an important element in tumor progression. They have been shown to exert important effects at nearly every stage of tumor progression with a number of studies demonstrating that their presence is critical to tumor development. Novel aspects of neutrophil biology have recently been elucidated and its contribution to tumorigenesis is only beginning to be appreciated. Neutrophil extracellular traps (NETs) are neutrophil-derived structures composed of DNA decorated with antimicrobial peptides. They have been shown to trap and kill microorganisms, playing a critical role in host defense. However, their contribution to tumor development and metastasis has recently been demonstrated in a number of studies highlighting NETs as a potentially important therapeutic target. Here, studies implicating NETs as facilitators of tumor progression and metastasis are reviewed. In addition, potential mechanisms by which NETs may exert these effects are explored. Finally, the ability to target NETs therapeutically in human neoplastic disease is highlighted.",
"title": ""
},
{
"docid": "1861cbfefd392f662b350e70c60f3b6b",
"text": "Text mining concerns looking for patterns in unstructured text. The related task of Information Extraction (IE) is about locating specific items in natural-language documents. This paper presents a framework for text mining, called DISCOTEX (Discovery from Text EXtraction), using a learned information extraction system to transform text into more structured data which is then mined for interesting relationships. The initial version of DISCOTEX integrates an IE module acquired by an IE learning system, and a standard rule induction module. In addition, rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of the underlying extraction system. Encouraging results are presented on applying these techniques to a corpus of computer job announcement postings from an Internet newsgroup.",
"title": ""
},
{
"docid": "0df006400924b05117a6d5b12fedfbb0",
"text": "The lack of data authentication and integrity guarantees in the Domain Name System (DNS) facilitates a wide variety of malicious activity on the Internet today. DNSSec, a set of cryptographic extensions to DNS, has been proposed to address these threats. While DNSSec does provide certain security guarantees, here we argue that it does not provide what users really need, namely end-to-end authentication and integrity. Even worse, DNSSec makes DNS much less efficient and harder to administer, thus significantly compromising DNS’s availability—arguably its most important characteristic. In this paper we explain the structure of DNS, examine the threats against it, present the details of DNSSec, and analyze the benefits of DNSSec relative to its costs. This cost-benefit analysis clearly shows that DNSSec deployment is a futile effort, one that provides little long-term benefit yet has distinct, perhaps very significant costs.",
"title": ""
},
{
"docid": "291ece850c1c6afcda49ac2e8a74319e",
"text": "The aim of this paper is to explore how well the task of text vs. nontext distinction can be solved in online handwritten documents using only offline information. Two systems are introduced. The first system generates a document segmentation first. For this purpose, four methods originally developed for machine printed documents are compared: x-y cut, morphological closing, Voronoi segmentation, and whitespace analysis. A state-of-the art classifier then distinguishes between text and non-text zones. The second system follows a bottom-up approach that classifies connected components. Experiments are performed on a new dataset of online handwritten documents containing different content types in arbitrary arrangements. The best system assigns 94.3% of the pixels to the correct class.",
"title": ""
},
{
"docid": "a8e50b6273adbb08f4be07aba6a224ea",
"text": "While the physiological adaptations that occur following endurance training in previously sedentary and recreationally active individuals are relatively well understood, the adaptations to training in already highly trained endurance athletes remain unclear. While significant improvements in endurance performance and corresponding physiological markers are evident following submaximal endurance training in sedentary and recreationally active groups, an additional increase in submaximal training (i.e. volume) in highly trained individuals does not appear to further enhance either endurance performance or associated physiological variables [e.g. peak oxygen uptake (VO2peak), oxidative enzyme activity]. It seems that, for athletes who are already trained, improvements in endurance performance can be achieved only through high-intensity interval training (HIT). The limited research which has examined changes in muscle enzyme activity in highly trained athletes, following HIT, has revealed no change in oxidative or glycolytic enzyme activity, despite significant improvements in endurance performance (p < 0.05). Instead, an increase in skeletal muscle buffering capacity may be one mechanism responsible for an improvement in endurance performance. Changes in plasma volume, stroke volume, as well as muscle cation pumps, myoglobin, capillary density and fibre type characteristics have yet to be investigated in response to HIT with the highly trained athlete. Information relating to HIT programme optimisation in endurance athletes is also very sparse. Preliminary work using the velocity at which VO2max is achieved (V(max)) as the interval intensity, and fractions (50 to 75%) of the time to exhaustion at V(max) (T(max)) as the interval duration has been successful in eliciting improvements in performance in long-distance runners. However, V(max) and T(max) have not been used with cyclists. Instead, HIT programme optimisation research in cyclists has revealed that repeated supramaximal sprinting may be equally effective as more traditional HIT programmes for eliciting improvements in endurance performance. Further examination of the biochemical and physiological adaptations which accompany different HIT programmes, as well as investigation into the optimal HIT programme for eliciting performance enhancements in highly trained athletes is required.",
"title": ""
},
{
"docid": "ff4f272d2ddfd41f58679c076b0acf63",
"text": "When scoring the quality of JPEG images, the two main considerations for viewers are blocking artifacts and improper luminance changes, such as blur. In this letter, we first propose two measures to estimate the blockiness and the luminance change within individual blocks. Then, a no-reference image quality assessment (NR-IQA) method for JPEG images is proposed. Our method obtains the quality score by considering the blocking artifacts and the luminance changes from all nonoverlapping 8 × 8 blocks in one JPEG image. The proposed method has been tested on five public IQA databases and compared with five state-of-the-art NR-IQA methods for JPEG images. The experimental results show that our method is more consistent with subjective evaluations than the state-of-the-art NR-IQA methods. The MATLAB source code of our method is available at http://image.ustc.edu.cn/IQA.html.",
"title": ""
}
] |
scidocsrr
|
a6244d7dd0a42f0a95685efa68e438e8
|
Active learning for biomedical citation screening
|
[
{
"docid": "b45aae55cc4e7bdb13463eff7aaf6c60",
"text": "Text retrieval systems typically produce a ranking of documents and let a user decide how far down that ranking to go. In contrast, programs that filter text streams, software that categorizes documents, agents which alert users, and many other IR systems must make decisions without human input or supervision. It is important to define what constitutes good effectiveness for these autonomous systems, tune the systems to achieve the highest possible effectiveness, and estimate how the effectiveness changes as new data is processed. We show how to do this for binary text classification systems, emphasizing that different goals for the system lead to different optimal behaviors. Optimizing and estimating effectiveness is greatly aided if classifiers that explicitly estimate the probability of class membership are used.",
"title": ""
},
{
"docid": "ff40eca4b4a27573e102b40c9f70aea4",
"text": "This paper is concerned with the question of how to online combine an ensemble of active learners so as to expedite the learning progress during a pool-based active learning session. We develop a powerful active learning master algorithm, based a known competitive algorithm for the multi-armed bandit problem and a novel semi-supervised performance evaluation statistic. Taking an ensemble containing two of the best known active learning algorithms and a new algorithm, the resulting new active learning master algorithm is empirically shown to consistently perform almost as well as and sometimes outperform the best algorithm in the ensemble on a range of classification problems.",
"title": ""
},
{
"docid": "174a35b2c608a7cbef4ca8183fc19d0e",
"text": "This paper shows how a text classifier’s need for labeled training documents can be reduced by taking advantage of a large pool of unlabeled documents. We modify the Query-by-Committee (QBC) method of active learning to use the unlabeled pool for explicitly estimating document density when selecting examples for labeling. Then active learning is combined with ExpectationMaximization in order to “fill in” the class labels of those documents that remain unlabeled. Experimental results show that the improvements to active learning require less than two-thirds as many labeled training examples as previous QBC approaches, and that the combination of EM and active learning requires only slightly more than half as many labeled training examples to achieve the same accuracy as either the improved active learning or EM alone.",
"title": ""
}
] |
[
{
"docid": "cd7be54d469bddf5ff644bf4bb45024c",
"text": "The estimation of psychological properties of relationships (e.g., popularity, influence, or trust) only from objective data in online social networks (OSNs) is a rather vague approach. A subjective assessment produces more accurate results, but it requires very complex and cumbersome surveys. The key contribution of this paper is a framework for personalized surveys on relationships in OSNs which follows a gamification approach. A game was developed and integrated into Facebook as an app, which makes it possible to obtain subjective ratings of users' relationships and objective data about the users, their interactions, and their social network. The combination of both subjective and objective data facilitates a deeper understanding of the psychological properties of relationships in OSNs, and lays the foundations for future research of subjective aspects within OSNs.",
"title": ""
},
{
"docid": "fd15f98ad6f43f6c5ee53f68a3d2cdc0",
"text": "In this paper, a new approach for hand tracking and gesture recognition based on the Leap Motion device and surface electromyography (SEMG) is presented. The system is about to process the depth image information and the electrical activity produced by skeletal muscles on forearm. The purpose of such combination is enhancement in the gesture recognition rate. As a first we analyse the conventional approaches toward hand tracking and gesture recognition and present the results of various researches. Successive topic gives brief overview of depth-sensing cameras with focus on Leap motion device where we test its accuracy of fingers recognition. The vision-SEMG-based system is to be potentially applicable to many areas of human computer interaction.",
"title": ""
},
{
"docid": "532463ff1e5e91a2f9054cb86dcfa654",
"text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.",
"title": ""
},
{
"docid": "6b40dce46554801c0d1375d6d18edb9a",
"text": "Mixture of experts (ME) is modular neural network architecture for supervised learning. A double-loop Expectation-Maximization (EM) algorithm has been introduced to the ME network structure for detection of epileptic seizure. The detection of epileptiform discharges in the EEG is an important component in the diagnosis of epilepsy. EEG signals were decomposed into the frequency subbands using discrete wavelet transform (DWT). Then these sub-band frequencies were used as an input to a ME network with two discrete outputs: normal and epileptic. In order to improve accuracy, the outputs of expert networks were combined according to a set of local weights called the ‘‘gating function’’. The invariant transformations of the ME probability density functions include the permutations of the expert labels and the translations of the parameters in the gating functions. The performance of the proposed model was evaluated in terms of classification accuracies and the results confirmed that the proposed ME network structure has some potential in detecting epileptic seizures. The ME network structure achieved accuracy rates which were higher than that of the standalone neural network model. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "085ef3104f22263be11f3a2b5f16ff34",
"text": "ARTICLE INFO Tumor is the one of the most common brain diesease and this is the reason for the diagnosis & treatment of the brain tumor has vital importance. MRI is the technique used to produce computerised image of internal body tissues. Cells are growing in uncontrollable manner this results in mass of unwanted tissue which is called as tumor. CT-Scan and MRI image which are diagnostic technique are used to detect brain tumor and classifies in types malignant & benign. This is difficult due to variations hence techniques like image preprocessing, feature extraction are used, there are many methods developed but they have different results. In this paper we are going to discuss the methods for detection of brain tumor and evaluate them.",
"title": ""
},
{
"docid": "742fef70793920d2b96c0877a2a7f371",
"text": "Cloud computing is an emerging technology and it allows users to pay as you need and has the high performance. Cloud computing is a heterogeneous system as well and it holds large amount of application data. In the process of scheduling some intensive data or computing an intensive application, it is acknowledged that optimizing the transferring and processing time is crucial to an application program. In this paper in order to minimize the cost of the processing we formulate a model for task scheduling and propose a particle swarm optimization (PSO) algorithm which is based on small position value rule. By virtue of comparing PSO algorithm with the PSO algorithm embedded in crossover and mutation and in the local research, the experiment results show the PSO algorithm not only converges faster but also runs faster than the other two algorithms in a large scale. The experiment results prove that the PSO algorithm is more suitable to cloud computing.",
"title": ""
},
{
"docid": "7dd3c935b6a5a38284b36ddc1dc1d368",
"text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "d35e51bceaeb813f6ef3786a8a384b71",
"text": "The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would chose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M ∼ (0.2 − 1)M , and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting. ORCID: http://orcid.org/0000-0001-8837-4127 ar X iv :1 80 6. 05 20 3v 1 [ as tr oph .C O ] 1 3 Ju n 20 18",
"title": ""
},
{
"docid": "4c941e492c517768cd623ea5d8ad79dc",
"text": "Multi-task Learning (MTL) is applied to the problem of predicting next-day health, stress, and happiness using data from wearable sensors and smartphone logs. Three formulations of MTL are compared: i) Multi-task Multi-Kernel learning, which feeds information across tasks through kernel weights on feature types, ii) a Hierarchical Bayes model in which tasks share a common Dirichlet prior, and iii) Deep Neural Networks, which share several hidden layers but have final layers unique to each task. We show that by using MTL to leverage data from across the population while still customizing a model for each person, we can account for individual differences, and obtain state-of-the-art performance on this dataset.",
"title": ""
},
{
"docid": "70e96cc632b25adab5afd4941696f456",
"text": "Requirements elicitation techniques are methods used by analysts to determine the needs of customers and users, so that systems can be built with a high probability of satisfying those needs. Analysts with extensive experience seem to be more successful than less experienced analysts in uncovering the user needs. Less experienced analysts often select a technique based on one of two reasons: (a) it is the only one they know, or (b) they think that a technique that worked well last time must surely be appropriate this time. This paper presents the results of in-depth interviews with some of the world's most experienced analysts. These results demonstrate how they select elicitation techniques based on a variety of situational assessments.",
"title": ""
},
{
"docid": "2b03868a73808a0135547427112dcaf8",
"text": "In this article we focus attention on ethnography’s place in CSCW by reflecting on how ethnography in the context of CSCW has contributed to our understanding of the sociality and materiality of work and by exploring how the notion of the ‘field site’ as a construct in ethnography provides new ways of conceptualizing ‘work’ that extends beyond the workplace. We argue that the well known challenges of drawing design implications from ethnographic research have led to useful strategies for tightly coupling ethnography and design. We also offer some thoughts on recent controversies over what constitutes useful and proper ethnographic research in the context of CSCW. Finally, we argue that as the temporal and spatial horizons of inquiry have expanded, along with new domains of collaborative activity, ethnography continues to provide invaluable perspectives.",
"title": ""
},
{
"docid": "3e113df3164468bd67060822de9a647c",
"text": "BACKGROUND\nPrevious estimates of the prevalence of geriatric depression have varied. There are few large population-based studies; most of these focused on individuals younger than 80 years. No US studies have been published since the advent of the newer antidepressant agents.\n\n\nMETHODS\nIn 1995 through 1996, as part of a large population study, we examined the current and lifetime prevalence of depressive disorders in 4,559 nondemented individuals aged 65 to 100 years. This sample represented 90% of the elderly population of Cache County, Utah. Using a modified version of the Diagnostic Interview Schedule, we ascertained past and present DSM-IV major depression, dysthymia, and subclinical depressive disorders. Medication use was determined through a structured interview and a \"medicine chest inventory.\"\n\n\nRESULTS\nPoint prevalence of major depression was estimated at 4.4% in women and 2.7% in men (P= .003). Other depressive syndromes were surprisingly uncommon (combined point prevalence, 1.6%). Among subjects with current major depression, 35.7% were taking an antidepressant (mostly selective serotonin reuptake inhibitors) and 27.4% a sedative/hypnotic. The current prevalence of major depression did not change appreciably with age. Estimated lifetime prevalence of major depression was 20.4% in women and 9.6% in men (P<.001), decreasing with age.\n\n\nCONCLUSIONS\nThese estimates for prevalence of major depression are higher than those reported previously in North American studies. Treatment with antidepressants was more common than reported previously, but was still lacking in most individuals with major depression. The prevalence of subsyndromal depressive symptoms was low, possibly because of unusual characteristics of the population.",
"title": ""
},
{
"docid": "30c75f37a7798b57a90376e88bb19270",
"text": "We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forwardfiltering backward-smoothing procedure which can be viewed as the non-linear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean-squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterised in terms of timevarying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based upon the filtered trajectories.",
"title": ""
},
{
"docid": "5f8a2db77dfa71ea2051a1a92b97f1f5",
"text": "Online communities are getting increasingly important for several different user groups; at the same time, community members seem to lack loyalty, as they often change from one community to another or use their community less over time. To survive and thrive, online communities must meet members' needs. By using qualitative data are from an extensive online survey of online community users and a representative sample of Internet users, 200 responses to an open quesion regarding community-loyalty was analyzed. Results show that there are 9 main reasons why community-users decrease in their participation over time or, in simple terms, stop using their online community: 1) Lack of interesting people/friends attending, 2) Low quality content, 3) Low usability, 4) Harassment and bullying 5) Time-consuming/isolating, 6) Low trust, 7) Over-commercialized, 8) Dissatisfaction with moderators and 9) Unspecified boring. The results, design implications and future research are discussed.",
"title": ""
},
{
"docid": "8bbd670bc459cc556a400c9bd2ca3f26",
"text": "We present ongoing work in linguistic processing of hashtags in Twitter text, with the goal of supplying normalized hashtag content to be used in more complex natural language processing (NLP) tasks. Hashtags represent collectively shared topic designators with considerable surface variation that can hamper semantic interpretation. Our normalization scripts allow for the lexical consolidation and segmentation of hashtags, potentially leading to improved semantic classification.",
"title": ""
},
{
"docid": "1a8bcfab4c66a3ac7b1b1112be46911a",
"text": "Despite the widespread assumption that students require scaffolding support for self-regulated learning (SRL) processes in computer-based learning environments (CBLEs), there is little clarity as to which types of scaffolds are most effective. This study offers a literature review covering the various scaffolds that support SRL processes in the domain of science education. Effective scaffolds are categorized and discussed according to the different areas and phases of SRL. The results reveal that most studies on scaffolding processes focus on cognition, whereas few focus on the non-cognitive areas of SRL. In the field of cognition, prompts appear to be the most effective scaffolds, especially for processes during the control phase. This review also shows that studies have paid little attention to scaffold designs, learner characteristics, or various task characteristics, despite the fact that these variables have been found to have a significant influence. We conclude with the implications of our results on future design and research in the field of SRL using CBLEs.",
"title": ""
},
{
"docid": "f863ca18e7b79e44c91f35e12495bbb4",
"text": "Wireless body-centric sensing systems have an important role in the fields of biomedicine, personal healthcare, safety, and security. Body-centric radio-frequency identification (RFID) technology provides a wireless and maintenance-free communication link between the human body and the surroundings through wearable and implanted antennas. This enables real-time monitoring of human vital signs everywhere. Seamlessly integrated wearable and implanted miniaturized antennas thus have the potential to revolutionize the everyday life of people, and to contribute to independent living. Low-cost and low-power system solutions will make widespread use of such technology become reality. The primary target applications for this research are body-centric sensing systems and the relatively new interdisciplinary field of wireless brain-machine interface (BMI) systems. Providing a direct wireless pathway between the brain and an external device, a wireless brain-machine interface holds an enormous potential for helping people suffering from severely disabling neurological conditions to communicate and manage their everyday life more independently. In this paper, we discuss RFID-inspired wireless brain-machine interface systems. We demonstrate that mm-size loop implanted antennas are capable of efficiently coupling to an external transmitting loop antenna through an inductive link. In addition, we focus on wearable antennas based on electrically conductive textiles and threads, and present design guidelines for their use as wearable-antenna conductive elements. Overall, our results constitute an important milestone in the development of wireless brain-machine interface systems, and a new era of wireless body-centric systems.",
"title": ""
},
{
"docid": "7b104b14b4219ecc2d1d141fbf0e707b",
"text": "As hospitals throughout Europe are striving exploit advantages of IT and network technologies, electronic medical records systems are starting to replace paper based archives. This paper suggests and describes an add-on service to electronic medical record systems that will help regular patients in getting insight to their diagnoses and medical record. The add-on service is based annotating polysemous and foreign terms with WordNet synsets. By exploiting the way that relationships between synsets are structured and described in WordNet, it is shown how patients can get interactive opportunities to generalize and understand their personal records.",
"title": ""
},
{
"docid": "705efc15f0c07c3028c691d5098fe921",
"text": "Antisocial behavior is a socially maladaptive and harmful trait to possess. This can be especially injurious for a child who is raised by a parent with this personality structure. The pathology of antisocial behavior implies traits such as deceitfulness, irresponsibility, unreliability, and an incapability to feel guilt, remorse, or even love. This is damaging to a child’s emotional, cognitive, and social development. Parents with this personality makeup can leave a child traumatized, empty, and incapable of forming meaningful personal relationships. Both genetic and environmental factors influence the development of antisocial behavior. Moreover, the child with a genetic predisposition to antisocial behavior who is raised with a parental style that triggers the genetic liability is at high risk for developing the same personality structure. Antisocial individuals are impulsive, irritable, and often have no concerns over their purported responsibilities. As a parent, this can lead to erratic discipline, neglectful parenting, and can undermine effective care giving. This paper will focus on the implications of parents with antisocial behavior and the impact that this behavior has on attachment as well as on the development of antisocial traits in children.",
"title": ""
}
] |
scidocsrr
|
3ffde8f94f5772631efc84da4d24b557
|
Adding Concurrency to Smart Contracts
|
[
{
"docid": "779c0081af334a597f6ee6942d7e7240",
"text": "We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contratual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.",
"title": ""
},
{
"docid": "c428c35e7bd0a2043df26d5e2995f8eb",
"text": "Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners' computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier's dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier's dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.",
"title": ""
}
] |
[
{
"docid": "6021388395ddd784422a22d30dac8797",
"text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.",
"title": ""
},
{
"docid": "13bfce7105cab1e4ea01fe94d04bcb97",
"text": "Recent years have seen a steady rise in the incidence of cutaneous malignant melanoma worldwide. Although it is now appreciated that the key to understanding the process by which melanocytes are transformed into malignant melanoma lies in the interplay between genetic factors and the ultraviolet (UV) spectrum of sunlight, the nature of this relation has remained obscure. Recently, prospects for elucidating the molecular mechanisms underlying such gene–environment interactions have brightened considerably through the development of UV-responsive experimental animal models of melanoma. Genetically engineered mice and human skin xenografts constitute novel platforms upon which to build studies designed to elucidate the pathogenesis of UV-induced melanomagenesis. The future refinement of these in vivo models should provide a wealth of information on the cellular and genetic targets of UV, the pathways responsible for the repair of UV-induced DNA damage, and the molecular interactions between melanocytes and other skin cells in response to UV. It is anticipated that exploitation of these model systems will contribute significantly toward the development of effective approaches to the prevention and treatment of melanoma.",
"title": ""
},
{
"docid": "e52bac5b665aae5cf020538ab37356bc",
"text": "The greater decrease of conduction velocity in sensory than in motor fibres of the peroneal, median and ulnar nerves (particularly in the digital segments) found in patients with chronic carbon disulphide poisoning, permitted the diagnosis of polyneuropathy to be made in the subclinical stage, even while the conduction in motor fibres was still within normal limits. A process of axonal degeneration is presumed to underlie occurrence of neuropathy consequent to carbon disulphide poisoning.",
"title": ""
},
{
"docid": "c8379f1382a191985cf55773d0cd02c9",
"text": "Utilizing Big Data scenarios that are generated from increasing digitization and data availability is a core topic in IS research. There are prospective advantages in generating business value from those scenarios through improved decision support and new business models. In order to harvest those potential advantages Big Data capabilities are required, including not only technological aspects of data management and analysis but also strategic and organisational aspects. To assess these capabilities, one can use capability assessment models. Employing a qualitative meta-analysis on existing capability assessment models, it can be revealed that the existing approaches greatly differ in their fundamental structure due to heterogeneous model elements. The heterogeneous elements are therefore synthesized and transformed into consistent assessment dimensions to fulfil the requirements of exhaustive and mutually exclusive aspects of a capability assessment model. As part of a broader research project to develop a consistent and harmonized Big Data Capability Assessment Model (BDCAM) a new design for a capability matrix is proposed including not only capability dimensions but also Big Data life cycle tasks in order to measure specific weaknesses along the process of data-driven value creation.",
"title": ""
},
{
"docid": "8ea2dadd6024e2f1b757818e0c5d76fa",
"text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.",
"title": ""
},
{
"docid": "4e88695ea1401fabb333d7323cbcb27b",
"text": "Many popular programs, such as Netscape, use untrusted helper applications to process data from the network. Unfortunately, the unauthenticated network data they interpret could well have been created by an adversary, and the helper applications are usually too complex to be bug-free. This raises signi cant security concerns. Therefore, it is desirable to create a secure environment to contain untrusted helper applications. We propose to reduce the risk of a security breach by restricting the program's access to the operating system. In particular, we intercept and lter dangerous system calls via the Solaris process tracing facility. This enabled us to build a simple, clean, user-mode implementation of a secure environment for untrusted helper applications. Our implementation has negligible performance impact, and can protect pre-existing applications.",
"title": ""
},
{
"docid": "9043a5aae40471cb9f671a33725b0072",
"text": "In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test- driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",
"title": ""
},
{
"docid": "38863f217a610af5378c42e03cd3fe3c",
"text": "In human movement learning, it is most common to teach constituent elements of complex movements in isolation, before chaining them into complex movements. Segmentation and recognition of observed movement could thus proceed out of this existing knowledge, which is directly compatible with movement generation. In this paper, we address exactly this scenario. We assume that a library of movement primitives has already been taught, and we wish to identify elements of the library in a complex motor act, where the individual elements have been smoothed together, and, occasionally, there might be a movement segment that is not in our library yet. We employ a flexible machine learning representation of movement primitives based on learnable nonlinear attractor system. For the purpose of movement segmentation and recognition, it is possible to reformulate this representation as a controlled linear dynamical system. An Expectation-Maximization algorithm can be developed to estimate the open parameters of a movement primitive from the library, using as input an observed trajectory piece. If no matching primitive from the library can be found, a new primitive is created. This process allows a straightforward sequential segmentation of observed movement into known and new primitives, which are suitable for robot imitation learning. We illustrate our approach with synthetic examples and data collected from human movement. Appearing in Proceedings of the 15 International Conference on Artificial Intelligence and Statistics (AISTATS) 2012, La Palma, Canary Islands. Volume XX of JMLR: W&CP XX. Copyright 2012 by the authors.",
"title": ""
},
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
},
{
"docid": "c0e2fd07072a65885e4d90f3fca7bdf3",
"text": "Urban heat island is among the most evident aspects of human impacts on the earth system. Here we assess the diurnal and seasonal variation of surface urban heat island intensity (SUHII) defined as the surface temperature difference between urban area and suburban area measured from the MODIS. Differences in SUHII are analyzed across 419 global big cities, and we assess several potential biophysical and socio-economic driving factors. Across the big cities, we show that the average annual daytime SUHII (1.5 ± 1.2 °C) is higher than the annual nighttime SUHII (1.1 ± 0.5 °C) (P < 0.001). But no correlation is found between daytime and nighttime SUHII across big cities (P = 0.84), suggesting different driving mechanisms between day and night. The distribution of nighttime SUHII correlates positively with the difference in albedo and nighttime light between urban area and suburban area, while the distribution of daytime SUHII correlates negatively across cities with the difference of vegetation cover and activity between urban and suburban areas. Our results emphasize the key role of vegetation feedbacks in attenuating SUHII of big cities during the day, in particular during the growing season, further highlighting that increasing urban vegetation cover could be one effective way to mitigate the urban heat island effect.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "572fbd0682b1b6ded39e8ef42325ad7c",
"text": "Here, we describe a real planning problem in the tramp shipping industry. A tramp shipping company may have a certain amount of contract cargoes that it is committed to carry, and tries to maximize the profit from optional cargoes. For real long-term contracts, the sizes of the cargoes are flexible. However, in previous research within tramp ship routing, the cargo quantities are regarded as fixed. We present an MP-model of the problem and a set partitioning approach to solve the multi-ship pickup and delivery problem with time windows and flexible cargo sizes. The columns are generated a priori and the most profitable ship schedule for each cargo set–ship combination is included in the set partitioning problem. We have tested the method on several real-life cases, and the results show the potential economical effects for the tramp shipping companies by utilizing flexible cargo sizes when generating the schedules. Journal of the Operational Research Society (2007) 58, 1167–1177. doi:10.1057/palgrave.jors.2602263 Published online 16 August 2006",
"title": ""
},
{
"docid": "a61c1e5c1eafd5efd8ee7021613cf90d",
"text": "A millimeter-wave (mmW) bandpass filter (BPF) using substrate integrated waveguide (SIW) is proposed in this work. A BPF with three resonators is formed by etching slots on the top metal plane of the single SIW cavity. The filter is investigated with the theory of electric coupling mechanism. The design procedure and design curves of the coupling coefficient (K) and quality factor (Q) are given and discussed here. The extracted K and Q are used to determine the filter circuit dimensions. In order to prove the validity, a SIW BPF operating at 140 GHz is fabricated in a single circuit layer using low temperature co-fired ceramic (LTCC) technology. The measured insertion loss is 1.913 dB at 140 GHz with a fractional bandwidth of 13.03%. The measured results are in good agreement with simulated results in such high frequency.",
"title": ""
},
{
"docid": "7b5eacf2e826e4b7a68395d9c7421463",
"text": "How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands.",
"title": ""
},
{
"docid": "46ef5b489f02a1b62b0fb78a28bfc32c",
"text": "Biobanks have been heralded as essential tools for translating biomedical research into practice, driving precision medicine to improve pathways for global healthcare treatment and services. Many nations have established specific governance systems to facilitate research and to address the complex ethical, legal and social challenges that they present, but this has not lead to uniformity across the world. Despite significant progress in responding to the ethical, legal and social implications of biobanking, operational, sustainability and funding challenges continue to emerge. No coherent strategy has yet been identified for addressing them. This has brought into question the overall viability and usefulness of biobanks in light of the significant resources required to keep them running. This review sets out the challenges that the biobanking community has had to overcome since their inception in the early 2000s. The first section provides a brief outline of the diversity in biobank and regulatory architecture in seven countries: Australia, Germany, Japan, Singapore, Taiwan, the UK, and the USA. The article then discusses four waves of responses to biobanking challenges. This article had its genesis in a discussion on biobanks during the Centre for Health, Law and Emerging Technologies (HeLEX) conference in Oxford UK, co-sponsored by the Centre for Law and Genetics (University of Tasmania). This article aims to provide a review of the issues associated with biobank practices and governance, with a view to informing the future course of both large-scale and smaller scale biobanks.",
"title": ""
},
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "c553ea1a03550bdc684dbacbb9bef385",
"text": "NeuCoin is a decentralized peer-to-peer cryptocurrency derived from Sunny King’s Peercoin, which itself was derived from Satoshi Nakamoto’s Bitcoin. As with Peercoin, proof-of-stake replaces proof-of-work as NeuCoin’s security model, effectively replacing the operating costs of Bitcoin miners (electricity, computers) with the capital costs of holding the currency. Proof-of-stake also avoids proof-of-work’s inherent tendency towards centralization resulting from competition for coinbase rewards among miners based on lowest cost electricity and hash power. NeuCoin increases security relative to Peercoin and other existing proof-of-stake currencies in numerous ways, including: (1) incentivizing nodes to continuously stake coins over time through substantially higher mining rewards and lower minimum stake age; (2) abandoning the use of coin age in the mining formula; (3) causing the stake modifier parameter to change over time for each stake; and (4) utilizing a client that punishes nodes that attempt to mine on multiple branches with duplicate stakes. This paper demonstrates how NeuCoin’s proof-of-stake implementation addresses all commonly raised “nothing at stake” objections to generic proof-of-stake systems. It also reviews many of the flaws of proof-of-work designs to highlight the potential for an alternate cryptocurrency that solves these flaws.",
"title": ""
},
{
"docid": "57c2422bac0a8f44b186fadbfcadb393",
"text": "In this paper, we propose a vision-based multiple lane boundaries detection and estimation structure that fuses the edge features and the high intensity features. Our approach utilizes a camera as the only input sensor. The application of Kalman filter for information fusion and tracking significantly improves the reliability and robustness of our system. We test our system on roads with different driving scenarios, including day, night, heavy traffic, rain, confusing textures and shadows. The feasibility of our approach is demonstrated by quantitative evaluation using manually labeled video clips.",
"title": ""
},
{
"docid": "2503784af4149b3d5bd61c458b6df2bf",
"text": "In this paper, our proposed method has two contributions to demosaicking: first, different from conventional interpolation methods based on two directions or four directions, the proposed method exploits to a greater degree correlations among neighboring pixels along eight directions to improve the interpolation performance. Second, we propose an efficient post-processing method to reduce interpolation artifacts based on the color difference planes. As compared with the latest demosaicking algorithms, experiments show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities.",
"title": ""
},
{
"docid": "4bdcc552853c8b658762c0c5d509f362",
"text": "In this work, we study the problem of partof-speech tagging for Tweets. In contrast to newswire articles, Tweets are usually informal and contain numerous out-ofvocabulary words. Moreover, there is a lack of large scale labeled datasets for this domain. To tackle these challenges, we propose a novel neural network to make use of out-of-domain labeled data, unlabeled in-domain data, and labeled indomain data. Inspired by adversarial neural networks, the proposed method tries to learn common features through adversarial discriminator. In addition, we hypothesize that domain-specific features of target domain should be preserved in some degree. Hence, the proposed method adopts a sequence-to-sequence autoencoder to perform this task. Experimental results on three different datasets show that our method achieves better performance than state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
4c552f3dd36f3e47050c1aacbf132262
|
E-Commerce in Saudi Arabia: adoption and perspectives
|
[
{
"docid": "bd13f54cd08fe2626fe8de4edce49197",
"text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "7250d1ea22aac1690799089d2ba1acd5",
"text": "Music plays an important part in people’s lives to regulate their emotions throughout the day. We conducted an online user study to investigate how the emotional state relates to the use of emotionally laden music. We found among 359 participants that they in general prefer emotionally laden music that correspond with their emotional state. However, when looking at personality traits, different patterns emerged. We found that when in a negative emotional state, those who scored high on openness, extraversion, and agreeableness tend to cheer themselves up with happy music, while those who scored high on neuroticism tend to increase their worry with sad music. With our results we show general patterns of music usage, but also individual differences. Our results contribute to the improvement of applications such as recommender systems in order to provide tailored recommendations based on users’ personality and emotional state.",
"title": ""
},
{
"docid": "ea95c1e0e60bd96f706861bcaeab9f74",
"text": "Legumes are widely used ensiling material as they are rich in protein. Ensiling of legumes arises several problems due to their low content of sugars, high buffering capacity and high moisture content. Attention should be paid to silage protein degradability. Objective of the study was to explain the effect of additives and wilting on the fermentation quality and nutritive value of red clover-timothy silage, protein degradability and content of amines included. Test silages from fresh material, either unwilted or wilted for 24 hours, were conserved into 3-litre jars and opened for analysis in 90 days. Biological (L. plantarum +L. fermentum) and chemical (AIV 2000) additives were used for treatment. Silage protein degradability was studied by using in sacco method with ruminally fistulated cows. As the buffering capacity of red clover-timothy mixture (50:50) is low (27.5), increasing up to 39.6 at wilting, treatment with additive is necessary to improve fermentation. The use of biological or chemical additives decreased silage dry matter losses by 1.9 to 3.7 times, significantly improving the quality of fermentation – content of butyric acid was 47 g/kg DM for test silage and 1–2 g/kg DM or 0 for silage without additive. In vitro organic matter digestibility for the silage with chemical additive increased by 4%, compared to that for the test silage (P<0.0001). Ruminal degradability of silage nitrogen was approximately 90%. Protein solubility and ruminal degradability were lower for silage with chemical additive. Ruminal degradability of silage protein after 8 h was 77.2% for the test silage, 76% for the silage with biological additive and 68% for the silage with chemical additive. All biogenic amines under investigation were present in low dry matter (140g/kg) silages, prepared without an additive. The content of histamine was the highest (5.24 g/kg DM), followed by putrescine (0.86 g/kg DM). Wilting and treatment with additive significantly decreased the formation of biogenic amines in silages.",
"title": ""
},
{
"docid": "ffef016fba37b3dc167a1afb7e7766f0",
"text": "We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the Bernoulli multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time T is O( lnT ∆ + 1 ∆3 ). And, for the N -armed bandit problem, the expected regret in time T is O( [ ( ∑N i=2 1 ∆i ) ] lnT ). Our bounds are optimal but for the dependence on ∆i and the constant factors in big-Oh.",
"title": ""
},
{
"docid": "4c54ccdc2c6219e185b701c75eb9e5b4",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Perceived development of psychological characteristics in Male and Female elite gymnasts Claire Calmels, Fabienne D’Arripe-Longueville, Magaly Hars, Nadine Debois",
"title": ""
},
{
"docid": "c2c2ddb9a6e42edcc1c035636ec1c739",
"text": "As the interest in DevOps continues to grow, there is an increasing need for software organizations to understand how to adopt it successfully. This study has as objective to clarify the concept and provide insight into existing challenges of adopting DevOps. First, the existing literature is reviewed. A definition of DevOps is then formed based on the literature by breaking down the concept into its defining characteristics. We interview 13 subjects in a software company adopting DevOps and, finally, we present 11 impediments for the company’s DevOps adoption that were identified based on the interviews.",
"title": ""
},
{
"docid": "1e2006e93ad382b3997736e446c2dff2",
"text": "Classical distillation methods transfer representations from a “teacher” neural network to a “student” network by matching their output activations. Recent methods also match the Jacobians, or the gradient of output activations with the input. However, this involves making some ad hoc decisions, in particular, the choice of the loss function. In this paper, we first establish an equivalence between Jacobian matching and distillation with input noise, from which we derive appropriate loss functions for Jacobian matching. We then rely on this analysis to apply Jacobian matching to transfer learning by establishing equivalence of a recent transfer learning procedure to distillation. We then show experimentally on standard image datasets that Jacobian-based penalties improve distillation, robustness to noisy inputs, and transfer learning.",
"title": ""
},
{
"docid": "768a4839232a39f8c4fe15ca095217d1",
"text": "Advances in deep learning over the last decade have led to a flurry of research in the application of deep artificial neural networks to robotic systems, with at least thirty papers published on the subject between 2014 and the present. This review discusses the applications, benefits, and limitations of deep learning vis-\\`a-vis physical robotic systems, using contemporary research as exemplars. It is intended to communicate recent advances to the wider robotics community and inspire additional interest in and application of deep learning in robotics.",
"title": ""
},
{
"docid": "a9cfb59c0187466d64010a3f39ac0e30",
"text": "Model-free Reinforcement Learning (RL) offers an attractive approach to learn control policies for highdimensional systems, but its relatively poor sample complexity often necessitates training in simulated environments. Even in simulation, goal-directed tasks whose natural reward function is sparse remain intractable for state-of-the-art model-free algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum for a model-free policy optimization algorithm. Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any model-free RL algorithm on a broad class of goal-directed continuous control MDPs. Its curriculum strategy is physically intuitive, easy-to-tune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the model-free RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naı̈ve exploration strategies.",
"title": ""
},
{
"docid": "0b3e5df3c317b748280e6253965e59e5",
"text": "The explicitly observed social relations from online social platforms have been widely incorporated into recommender systems to mitigate the data sparsity issue. However, the direct usage of explicit social relations may lead to an inferior performance due to the unreliability (e.g., noises) of observed links. To this end, the discovery of reliable relations among users plays a central role in advancing social recommendation. In this paper, we propose a novel approach to adaptively identify implicit friends toward discovering more credible user relations. Particularly, implicit friends are those who share similar tastes but could be distant from each other on the network topology of social relations. Methodologically, to find the implicit friends for each user, we first model the whole system as a heterogeneous information network, and then capture the similarity of users through the meta-path based embedding representation learning. Finally, based on the intuition that social relations have varying degrees of impact on different users, our approach adaptively incorporates different numbers of similar users as implicit friends for each user to alleviate the adverse impact of unreliable social relations for a more effective recommendation. Experimental analysis on three real-world datasets demonstrates the superiority of our method and explain why implicit friends are helpful in improving social recommendation.",
"title": ""
},
{
"docid": "3ad124875f073ff961aaf61af2832815",
"text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT\na perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.",
"title": ""
},
{
"docid": "1c7027cc8086830709ea2d5a41d13d20",
"text": "Hypervisor-based virtualization technology has been successfully used to deploy high-performance and scalable infrastructure for Hadoop, and now Spark applications. Container-based virtualization techniques are becoming an important option, which is increasingly used due to their lightweight operation and better scaling when compared to Virtual Machines (VM). With containerization techniques such as Docker becoming mature and promising better performance, we can use Docker to speed-up big data applications. However, as applications have different behaviors and resource requirements, before replacing traditional hypervisor-based virtual machines with Docker, it is important to analyze and compare performance of applications running in the cloud with VMs and Docker containers. VM provides distributed resource management for different virtual machines running with their own allocated resources, while Docker relies on shared pool of resources among all containers. Here, we investigate the performance of different Apache Spark applications using both Virtual Machines (VM) and Docker containers. While others have looked at Docker's performance, this is the first study that compares these different virtualization frameworks for a big data enterprise cloud environment using Apache Spark. In addition to makespan and execution time, we also analyze different resource utilization (CPU, disk, memory, etc.) by Spark applications. Our results show that Spark using Docker can obtain speed-up of over 10 times when compared to using VM. However, we observe that this may not apply to all applications due to different workload patterns and different resource management schemes performed by virtual machines and containers. Our work can guide application developers, system administrators and researchers to better design and deploy big data applications on their platforms to improve the overall performance.",
"title": ""
},
{
"docid": "899e96eacd2c73730c157056c56eea25",
"text": "Hyaluronic acid (HA), a macropolysaccharidic component of the extracellular matrix, is common to most species and it is found in many sites of the human body, including skin and soft tissue. Not only does HA play a variety of roles in physiologic and in pathologic events, but it also has been extensively employed in cosmetic and skin-care products as drug delivery agent or for several biomedical applications. The most important limitations of HA are due to its short half-life and quick degradation in vivo and its consequently poor bioavailability. In the aim to overcome these difficulties, HA is generally subjected to several chemical changes. In this paper we obtained an acetylated form of HA with increased bioavailability with respect to the HA free form. Furthermore, an improved radical scavenging and anti-inflammatory activity has been evidenced, respectively, on ABTS radical cation and murine monocyte/macrophage cell lines (J774.A1).",
"title": ""
},
{
"docid": "32378690ded8920eb81689fea1ac8c23",
"text": "OBJECTIVE\nTo investigate the effect of Beri-honey-impregnated dressing on diabetic foot ulcer and compare it with normal saline dressing.\n\n\nSTUDY DESIGN\nA randomized, controlled trial.\n\n\nPLACE AND DURATION OF STUDY\nSughra Shafi Medical Complex, Narowal, Pakistan and Bhatti International Trust (BIT) Hospital, Affiliated with Central Park Medical College, Lahore, from February 2006 to February 2010.\n\n\nMETHODOLOGY\nPatients with Wagner's grade 1 and 2 ulcers were enrolled. Those patients were divided in two groups; group A (n=179) treated with honey dressing and group B (n=169) treated with normal saline dressing. Outcome measures were calculated in terms of proportion of wounds completely healed (primary outcome), wound healing time, and deterioration of wounds. Patients were followed-up for a maximum of 120 days.\n\n\nRESULTS\nOne hundred and thirty six wounds (75.97%) out of 179 were completely healed with honey dressing and 97 (57.39%) out of 169 wtih saline dressing (p=0.001). The median wound healing time was 18.00 (6 - 120) days (Median with IQR) in group A and 29.00 (7 - 120) days (Median with IQR) in group B (p < 0.001).\n\n\nCONCLUSION\nThe present results showed that honey is an effective dressing agent instead of conventional dressings, in treating patients of diabetic foot ulcer.",
"title": ""
},
{
"docid": "e81f1caa398de7f56a70cc4db18d58db",
"text": "UNLABELLED\nThis study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts.\n\n\nIN CONCLUSION\n1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.",
"title": ""
},
{
"docid": "b0dd406b590658aa262b103e8cea4296",
"text": "This paper proposes the differential energy watermarking (DEW) algorithm for JPEG/MPEG streams. The DEW algorithm embeds label bits by selectively discarding high frequency discrete cosine transform (DCT) coefficients in certain image regions. The performance of the proposed watermarking algorithm is evaluated by the robustness of the watermark, the size of the watermark, and the visual degradation the watermark introduces. These performance factors are controlled by three parameters, namely the maximal coarseness of the quantizer used in pre-encoding, the number of DCT blocks used to embed a single watermark bit, and the lowest DCT coefficient that we permit to be discarded. We follow a rigorous approach to optimizing the performance and choosing the correct parameter settings by developing a statistical model for the watermarking algorithm. Using this model, we can derive the probability that a label bit cannot be embedded. The resulting model can be used, for instance, for maximizing the robustness against re-encoding and for selecting adequate error correcting codes for the label bit string.",
"title": ""
},
{
"docid": "80ef125fb855cfb76197e474ec371726",
"text": "An electric arc furnace is a nonlinear, time varying load with stochastic behavior, which gives rise to harmonics, interharminics and voltage flicker. Since a power system has finite impedance, the current distortion caused by a DC electric arc furnace load creates a corresponding voltage distortion in the supply lines. The current and voltage harmonic distortion causes several problems in electrical power system, such as electrical, electronic, and computer equipment damage, control system errors due to electrical noise caused by harmonics, additional losses in transmission and distribution networks and etc. This paper makes an effort to display the differences between two types of DC electric arc furnace feeding system from the viewpoint of total harmonic distortion at AC side. These Types of feeding system include controlled rectifier power supply and uncontrolled rectifier chopper power supply. Simulation results show that the uncontrolled rectifier chopper power supply is more efficient than the other one.",
"title": ""
},
{
"docid": "16c05466aa84e1704b528ccac34a4004",
"text": "Most cloud services are built with multi-tenancy which enables data and configuration segregation upon shared infrastructures. Each tenant essentially operates in an individual silo without interacting with other tenants. As cloud computing evolves we anticipate there will be increased need for tenants to collaborate across tenant boundaries. This will require cross-tenant trust models supported and enforced by the cloud service provider. Considering the on-demand self-service feature intrinsic to cloud computing, we propose a formal cross-tenant trust model (CTTM) and its role-based extension (RB-CTTM) integrating various types of trust relations into cross-tenant access control models which can be enforced by the multi-tenant authorization as a service (MTAaaS) platform in the cloud.",
"title": ""
},
{
"docid": "1dd8599c88a29ed0c4cfd0a502b50b71",
"text": "Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents.",
"title": ""
},
{
"docid": "60971d26877ef62b816526f13bd76c24",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)",
"title": ""
},
{
"docid": "f5a8159717f1d413b45ec14bf8924a1c",
"text": "BACKGROUND\nLegacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free text searches, that lead to false positive results, the precision can be increased by constraining the search to certain parts of documents.\n\n\nMETHODS\nA search ontology-based specification of queries on XML documents defines search concepts and relates them to parts in the XML document structure. Such query specification method is practically introduced and evaluated by applying concrete research questions formulated in natural language on a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering that reuses ontologies and XML-related W3C standards.\n\n\nRESULTS\nThe key result is that the specification of research questions can be supported by the usage of search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content is necessary for a further improvement of precision and recall. Key limitation is that the application of the introduced process requires skills in ontology and software development. In future, the time consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist.\n\n\nCONCLUSION\nThe introduced Search Ontology XML extension connects Search Terms to certain parts in XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering by the specification of complex XPath expressions without deep syntax knowledge about XPaths.",
"title": ""
}
] |
scidocsrr
|
73643d5a2a8f9d6d689324f8c86839b2
|
3D brain tumor segmentation in multimodal MR images based on learning population- and patient-specific feature sets
|
[
{
"docid": "4b1ae8c52831341727b62687f26f300f",
"text": "A novel region-based active contour model (ACM) is proposed in this paper. It is implemented with a special processing named Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) method, which first selectively penalizes the level set function to be binary, and then uses a Gaussian smoothing kernel to regularize it. The advantages of our method are as follows. First, a new region-based signed pressure force (SPF) function is proposed, which can efficiently stop the contours at weak or blurred edges. Second, the exterior and interior boundaries can be automatically detected with the initial contour being anywhere in the image. Third, the proposed ACM with SBGFRLS has the property of selective local or global segmentation. It can segment not only the desired object but also the other objects. Fourth, the level set function can be easily initialized with a binary function, which is more efficient to construct than the widely used signed distance function (SDF). The computational cost for traditional re-initialization can also be reduced. Finally, the proposed algorithm can be efficiently implemented by the simple finite difference scheme. Experiments on synthetic and real images demonstrate the advantages of the proposed method over geodesic active contours (GAC) and Chan–Vese (C–V) active contours in terms of both efficiency and accuracy. 2009 Elsevier B.V. All rights reserved.",
"title": ""
}
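The following sketch is an editor-added illustration of the signed pressure force (SPF) and the binary-plus-Gaussian regularization summarized in the positive passage above; it is not the authors' implementation. The constant balloon coefficient alpha, the smoothing width sigma, the sign-based binarization, and dropping the |∇φ| factor from the evolution term are simplifying assumptions made only for this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spf(image, phi):
    """Signed pressure force from the mean intensities inside/outside the zero level set."""
    inside = phi > 0
    c1 = image[inside].mean()      # mean intensity inside the contour
    c2 = image[~inside].mean()     # mean intensity outside the contour
    force = image - (c1 + c2) / 2.0
    return force / (np.abs(force).max() + 1e-12)

def sbgfrls_step(image, phi, alpha=20.0, dt=1.0, sigma=1.5):
    """One evolution step: move the level set, force it to be binary, then Gaussian-smooth it."""
    phi = phi + dt * alpha * spf(image, phi)   # region-based evolution term
    phi = np.sign(phi)                         # selective binary penalization
    return gaussian_filter(phi, sigma)         # Gaussian regularization instead of re-initialization

# Toy usage: segment a bright square starting from a small seed region.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
phi = -np.ones_like(img); phi[28:36, 28:36] = 1.0
for _ in range(30):
    phi = sbgfrls_step(img, phi)
mask = phi > 0
```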
] |
[
{
"docid": "be7b6112f147213511a3c433337c2da7",
"text": "We assessed the physical and chemical stability of docetaxel infusion solutions. Stability of the antineoplastic drug was determined 1.) after reconstitution of the injection concentrate and 2.) after further dilution in two commonly used vehicle‐solutions, 0.9% sodium chloride and 5% dextrose, in PVC bags and polyolefine containers. Chemical stability was measured by using a stability‐indicating HPLC assay with ultraviolet detection. Physical stability was determined by visual inspection. The stability tests revealed that reconstituted docetaxel solutions (= premix solutions) are physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, independent of the storage temperature (refrigerated, room temperature). Diluted infusion solutions (docetaxel concentration 0.3 mg/ml and 0.9 mg/ml), with either vehicle‐solution, proved physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, when prepared in polyolefine containers and stored at room temperature. However, diluted infusion solutions exhibited limited physical stability in PVC bags, because docetaxel precipitation occured irregularly, though not before day 5 of storage. In addition, time‐dependent DEHP‐leaching from PVC infusion bags by docetaxel infusion solutions must be considered.",
"title": ""
},
{
"docid": "784dc9c78e6552e4df8bfd9a7796d847",
"text": "For image generation, deep neural networks are trained to extract high-level features on natural images and to reconstruct the images from the features. However it is difficult to learn to generate images containing enormous contents. To overcome this difficulty, a network with an attention mechanism has been proposed. It is trained to attend to parts of the image and to generate images step by step. This enables the network to deal with the details of a part of the image and the rough structure of the entire image. The attention mechanism is implemented by recurrent neural networks. Additionally, the Generative Adversarial Networks (GANs) approach has been proposed to generate more realistic images. In this study, we present image generation where leverages effectiveness of attention mechanism and the GANs approach. We show our method enables the iterative construction of images and more realistic image generation than standard GANs and the attention mechanism of DRAW.",
"title": ""
},
{
"docid": "6a624b97d996372de9f385798e02d2df",
"text": "Due to the continuous increase of the world population living in cities, it is crucial to identify strategic plans and perform associated actions to make cities smarter, i.e., more operationally efficient, socially friendly, and environmentally sustainable, in a cost effective manner. To achieve these goals, emerging smart cities need to be optimally and intelligently measured, monitored, and managed. In this context the paper proposes the development of a framework for classifying performance indicators of a smart city. It is based on two dimensions: the degree of objectivity of observed variables and the level of technological advancement for data collection. The paper shows an application of the presented framework to the case of the Bari municipality (Italy).",
"title": ""
},
{
"docid": "0e7c07a9b7e34c40bcfb5b98e7a64760",
"text": "Weintroduceanewkindofmosaicing,where thepositionof the samplingstrip varies asa functionof the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulationsof the strip sampling function,wecanchange the locationof oneof the virtual slits, providingavirtualwalkthroughof aX-slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions.",
"title": ""
},
{
"docid": "342cf76dd8b12195829aa33230bf5751",
"text": "Support Vector Machines (SVMs) have been very successful in text classification. However, the intrinsic geometric structure of text data has been ignored by standard kernels commonly used in SVMs. It is natural to assume that the documents are on the multinomial manifold, which is the simplex of multinomial models furnished with the Riemannian structure induced by the Fisher information metric. We prove that the Negative Geodesic Distance (NGD) on the multinomial manifold is conditionally positive definite (cpd), thus can be used as a kernel in SVMs. Experiments show the NGD kernel on the multinomial manifold to be effective for text classification, significantly outperforming standard kernels on the ambient Euclidean space.",
"title": ""
},
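As an editor-added illustration of the geometry in the preceding passage (not code from that paper), the geodesic distance between two points on the multinomial simplex under the Fisher information metric is 2·arccos of the Bhattacharyya coefficient, and the NGD kernel is simply its negative. Representing documents as L1-normalized term-frequency vectors is an assumption of the sketch.

```python
import numpy as np

def to_simplex(counts):
    """Map a nonnegative term-frequency vector to a point on the multinomial simplex."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def geodesic_distance(p, q):
    """Geodesic distance on the multinomial manifold under the Fisher information metric."""
    bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))

def ngd_kernel(p, q):
    """Negative Geodesic Distance kernel (conditionally positive definite)."""
    return -geodesic_distance(p, q)

doc_a = to_simplex([3, 0, 1, 2])
doc_b = to_simplex([1, 1, 1, 1])
print(ngd_kernel(doc_a, doc_b))
```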
{
"docid": "8d7b77ac807e4ab49831aaa02fa1fbd4",
"text": "A Ka-band circularly polarized (CP) planar array antenna with wide axial ratio (AR) bandwidth and high efficiency is presented in this paper. A crossed slot with four parasitic patches is proposed as the CP element, which is fed by a unique 90° delay line. There are three minimal AR points within the operational frequency band to widen the bandwidth. The 90° delay line has three branches to realize the phase delay, the amplitude compensation, and the impedance matching, respectively. Thus, the design process is clear and simple. Several short-circuited posts are added in the feeding layer to suppress the TEM wave propagating outside the feeding network. Simulated results of the element show an AR bandwidth (AR <; 3 dB) of 21.2% from 25.3 to 31.3 GHz and a gain of 5.7-6.7 dBic over the same frequency band. Measured results of a fabricated 4 × 4 array demonstrate about 14% AR bandwidth and more than 17.4 dBic gain within the frequency band of 26.4-30.3 GHz. The maximum realized gain reaches to 18.2 dBic. The array antenna with a waveguide transition is fabricated through multilayer PCB process and has a size of 72 mm × 48 mm × 2.2 mm.",
"title": ""
},
{
"docid": "fd5f48aebc8fba354137dadb445846bc",
"text": "BACKGROUND\nThe syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research.\n\n\nMETHODS\nWe conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity.\n\n\nRESULTS\nThe Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings.\n\n\nCONCLUSIONS\nThe ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement.",
"title": ""
},
{
"docid": "b89d42f836730a782a9b0f5df5bbd5bd",
"text": "This paper proposes a new usability evaluation checklist, UseLearn, and a related method for eLearning systems. UseLearn is a comprehensive checklist which incorporates both quality and usability evaluation perspectives in eLearning systems. Structural equation modeling is deployed to validate the UseLearn checklist quantitatively. The experimental results show that the UseLearn method supports the determination of usability problems by criticality metric analysis and the definition of relevant improvement strategies. The main advantage of the UseLearn method is the adaptive selection of the most influential usability problems, and thus significant reduction of the time and effort for usability evaluation can be achieved. At the sketching and/or design stage of eLearning systems, it will provide an effective guidance to usability analysts as to what problems should be focused on in order to improve the usability perception of the end-users. Relevance to industry: During the sketching or design stage of eLearning platforms, usability problems should be revealed and eradicated to create more usable and quality eLearning systems to satisfy the end-users. The UseLearn checklist along with its quantitative methodology proposed in this study would be helpful for usability experts to achieve this goal. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4597ab07ac630eb5e256f57530e2828e",
"text": "This paper presents novel QoS extensions to distributed control plane architectures for multimedia delivery over large-scale, multi-operator Software Defined Networks (SDNs). We foresee that large-scale SDNs shall be managed by a distributed control plane consisting of multiple controllers, where each controller performs optimal QoS routing within its domain and shares summarized (aggregated) QoS routing information with other domain controllers to enable inter-domain QoS routing with reduced problem dimensionality. To this effect, this paper proposes (i) topology aggregation and link summarization methods to efficiently acquire network topology and state information, (ii) a general optimization framework for flow-based end-to-end QoS provision over multi-domain networks, and (iii) two distributed control plane designs by addressing the messaging between controllers for scalable and secure inter-domain QoS routing. We apply these extensions to streaming of layered videos and compare the performance of different control planes in terms of received video quality, communication cost and memory overhead. Our experimental results show that the proposed distributed solution closely approaches the global optimum (with full network state information) and nicely scales to large networks.",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
{
"docid": "2cff047c4b2577c99aa66df211b0beda",
"text": "Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed in past three decades with varying denoising performances. More recently, having outperformed all conventional methods, deep learning based models have shown a great promise. These methods are however limited for requirement of large training sample size and high computational costs. In this paper we show that using small sample size, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost sample size for increased denoising performance. Simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to human eye.",
"title": ""
},
{
"docid": "f82eb2d4cc45577f08c7e867bf012816",
"text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.",
"title": ""
},
{
"docid": "fba2cce267a075c24a1378fd55de6113",
"text": "This paper presents a novel mixed reality rehabilitation system used to help improve the reaching movements of people who have hemiparesis from stroke. The system provides real-time, multimodal, customizable, and adaptive feedback generated from the movement patterns of the subject's affected arm and torso during reaching to grasp. The feedback is provided via innovative visual and musical forms that present a stimulating, enriched environment in which to train the subjects and promote multimodal sensory-motor integration. A pilot study was conducted to test the system function, adaptation protocol and its feasibility for stroke rehabilitation. Three chronic stroke survivors underwent training using our system for six 75-min sessions over two weeks. After this relatively short time, all three subjects showed significant improvements in the movement parameters that were targeted during training. Improvements included faster and smoother reaches, increased joint coordination and reduced compensatory use of the torso and shoulder. The system was accepted by the subjects and shows promise as a useful tool for physical and occupational therapists to enhance stroke rehabilitation.",
"title": ""
},
{
"docid": "a50a9f45b25f21ce4ef04f686d25e36f",
"text": "Twitter is the largest and most popular micro-blogging website on Internet. Due to low publication barrier, anonymity and wide penetration, Twitter has become an easy target or platform for extremists to disseminate their ideologies and opinions by posting hate and extremism promoting tweets. Millions of tweets are posted on Twitter everyday and it is practically impossible for Twitter moderators or an intelligence and security analyst to manually identify such tweets, users and communities. However, automatic classification of tweets into predefined categories is a non-trivial problem problem due to short text of the tweet (the maximum length of a tweet can be 140 characters) and noisy content (incorrect grammar, spelling mistakes, presence of standard and non-standard abbreviations and slang). We frame the problem of hate and extremism promoting tweet detection as a one-class or unary-class categorization problem by learning a statistical model from a training set containing only the objects of one class . We propose several linguistic features such as presence of war, religious, negative emotions and offensive terms to discriminate hate and extremism promoting tweets from other tweets. We employ a single-class SVM and KNN algorithm for one-class classification task. We conduct a case-study on Jihad, perform a characterization study of the tweets and measure the precision and recall of the machine-learning based classifier. Experimental results on large and real-world dataset demonstrate that the proposed approach is effective with F-score of 0.60 and 0.83 for the KNN and SVM classifier respectively.",
"title": ""
},
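A minimal editor-added sketch of the one-class formulation described in the preceding passage, not the authors' code: scikit-learn's OneClassSVM is trained only on examples of the target class and then flags unseen texts as in-class (+1) or outliers (-1). TF-IDF features stand in here for the paper's hand-crafted linguistic features, and the placeholder strings and the nu value are assumptions made for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

# Training data contains only objects of the single target class (placeholder strings).
target_class_texts = [
    "placeholder tweet from the target class about conflict",
    "another placeholder tweet from the target class",
    "yet another placeholder tweet from the target class",
    "one more placeholder tweet from the target class",
]
unseen_texts = [
    "a tweet about the weather today",
    "placeholder tweet from the target class about conflict",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(target_class_texts)

clf = OneClassSVM(kernel="linear", nu=0.5)   # unary-class (single-class) SVM
clf.fit(X_train)

# +1 = predicted member of the target class, -1 = outlier.
print(clf.predict(vectorizer.transform(unseen_texts)))
```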
{
"docid": "1d7ee43299e3a7581d11604f1596aeab",
"text": "We analyze the impact of corruption on bilateral trade, highlighting its dual role in terms of extortion and evasion. Corruption taxes trade, when corrupt customs officials in the importing country extort bribes from exporters (extortion effect); however, with high tariffs, corruption may be trade enhancing when corrupt officials allow exporters to evade tariff barriers (evasion effect). We derive and estimate a corruption-augmented gravity model, where the effect of corruption on trade flows is ambiguous and contingent on tariffs. Empirically, corruption taxes trade in the majority of cases, but in high-tariff environments (covering 5% to 14% of the observations) their marginal effect is trade enhancing.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "7ddf437114258023cc7d9c6d51bb8f94",
"text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.",
"title": ""
},
{
"docid": "16d6862cf891e5219aae10d5fcd6ce92",
"text": "This paper describes the Power System Analysis Toolbox (PSAT), an open source Matlab and GNU/Octave-based software package for analysis and design of small to medium size electric power systems. PSAT includes power flow, continuation power flow, optimal power flow, small-signal stability analysis, and time-domain simulation, as well as several static and dynamic models, including nonconventional loads, synchronous and asynchronous machines, regulators, and FACTS. PSAT is also provided with a complete set of user-friendly graphical interfaces and a Simulink-based editor of one-line network diagrams. Basic features, algorithms, and a variety of case studies are presented in this paper to illustrate the capabilities of the presented tool and its suitability for educational and research purposes.",
"title": ""
},
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
}
] |
scidocsrr
|
012514a33ce6e7fc8f89aeaa23a06ba3
|
Mean Box Pooling: A Rich Image Representation and Output Embedding for the Visual Madlibs Task
|
[
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "914d17433df678e9ace1c9edd1c968d3",
"text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.",
"title": ""
}
] |
[
{
"docid": "30f4dfd49f1ba53f3a4786ae60da3186",
"text": "In order to improve the speed limitation of serial scrambler, we propose a new parallel scrambler architecture and circuit to overcome the limitation of serial scrambler. A very systematic parallel scrambler design methodology is first proposed. The critical path delay is only one D-register and one xor gate of two inputs. Thus, it is superior to other proposed circuits in high-speed applications. A new DET D-register with embedded xor operation is used as a basic circuit block of the parallel scrambler. Measurement results show the proposed parallel scrambler can operate in 40 Gbps with 16 outputs in TSMC 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "0fd48f6f0f5ef1e68c2a157c16713e86",
"text": "Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.",
"title": ""
},
{
"docid": "65a7b631bdf02ae108e07eb35c5dfe55",
"text": "Wireless access networks scale by replicating base stations geographically and then allowing mobile clients to seamlessly \"hand off\" from one station to the next as they traverse the network. However, providing the illusion of continuous connectivity requires selecting the right moment to handoff and the right base station to transfer to. Unfortunately, 802.11-based networks only attempt a handoff when a client's service degrades to a point where connectivity is threatened. Worse, the overhead of scanning for nearby base stations is routinely over 250 ms - during which incoming packets are dropped - far longer than what can be tolerated by highly interactive applications such as voice telephony. In this paper we describe SyncScan, a low-cost technique for continuously tracking nearby base stations by synchronizing short listening periods at the client with periodic transmissions from each base station. We have implemented this SyncScan algorithm using commodity 802.11 hardware and we demonstrate that it allows better handoff decisions and over an order of magnitude improvement in handoff delay. Finally, our approach only requires trivial implementation changes, is incrementally deployable and is completely backward compatible with existing 802.11 standards.",
"title": ""
},
{
"docid": "eb3b1550daa111b1977ee7e4a3ec6e43",
"text": "This paper introduces an inexpensive prosthetic hand control system designed to reduce the cognitive burden on amputees. It is designed around a vision-based object recognition system with an embedded camera that automates grasp selection and switching, and an inexpensive mechanomyography (MMG) sensor for hand opening and closing. A prototype has been developed and implemented to select between two different grasp configurations for the Bebionic V2 hand, developed by RSLSteeper. Pick and place experiments on 6 different objects in `Power' and `Pinch' grasps were used to assess feasibility on which to base full system development. Experimentation demonstrated an overall accuracy of 84.4% for grasp selection between pairs of objects. The results showed that it was more difficult to classify larger objects due to their size relative to the camera resolution. The grasping task became more accurate with time, indicating learning capability when estimating the position and trajectory of the hand for correct grasp selection; however further experimentation is required to form a conclusion. The limitation of this involves the use of unnatural reaching trajectories for correct grasp selection. The success in basic experimentation provides the proof of concept required for further system development.",
"title": ""
},
{
"docid": "fa2c3c8946ebb97e119ba25cab52ff5c",
"text": "The digital era arrives with a whole set of disruptive technologies that creates both risk and opportunity for open sources analysis. Although the sheer quantity of online conversations makes social media a huge source of information, their analysis is still a challenging task and many of traditional methods and research methodologies for data mining are not fit for purpose. Social data mining revolves around subjective content analysis, which deals with the computational processing of texts conveying people's evaluations, beliefs, attitudes and emotions. Opinion mining and sentiment analysis are the main paradigm of social media exploration and both concepts are often interchangeable. This paper investigates the use of appraisal categories to explore data gleaned for social media, going beyond the limitations of traditional sentiment and opinion-oriented approaches. Categories of appraisal are grounded on cognitive foundations of the appraisal theory, according to which people's emotional response are based on their own evaluative judgments or appraisals of situations, events or objects. A formal model is developed to describe and explain the way language is used in the cyberspace to evaluate, express mood and subjective states, construct personal standpoints and manage interpersonal interactions and relationships. A general processing framework is implemented to illustrate how the model is used to analyze a collection of tweets related to extremist attitudes.",
"title": ""
},
{
"docid": "d12e9664d73b29b43c650a8606ec7e2b",
"text": "As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This paper challenges the AI community to develop theory and to implement prototypes of ad hoc team agents. It defines the concept of ad hoc team agents, specifies an evaluation paradigm, and provides examples of possible theoretical and empirical approaches to challenge. The goal is to encourage progress towards this ambitious, newly realistic, and increasingly important research goal.",
"title": ""
},
{
"docid": "eba4faac7a6a0e0da2e860f9ddb01801",
"text": "Current research in Information Extraction tends to be focused on application-specific systems tailored to a particular domain. The Muse system is a multi-purpose Named Entity recognition system which aims to reduce the need for costly and time-consuming adaptation of systems to new applications, with its capability for processing texts from widely differing domains and genres. Although the system is still under development, preliminary results are encouraging, showing little degradation when processing texts of lower quality or of unusual types. The system currently averages 93% precision and 95% recall across a variety of text types.",
"title": ""
},
{
"docid": "c9077052caa804aaa58d43aaf8ba843f",
"text": "Many authors have laid down a concept about organizational learning and the learning organization. Amongst them They contributed an explanation on how organizations learn and provided tools to transfer the theoretical concepts of organizational learning into practice. Regarding the present situation it seems, that organizational learning becomes even more important. This paper provides a complementary view on the learning organization from the perspective of the evolutionary epistemology. The evolutionary epistemology gives an answer, where the subjective structures of cognition come from and why they are similar in all human beings. Applying this evolutionary concept to organizations it could be possible to provide a deeper insight of the cognition processes of organizations and explain the principles that lay behind a learning organization. It also could give an idea, which impediments in learning, caused by natural dispositions, deduced from genetic barriers of cognition in biology are existing and managers must be aware of when trying to facilitate organizational learning within their organizations.",
"title": ""
},
{
"docid": "26dac00bc328dc9c8065ff105d1f8233",
"text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.",
"title": ""
},
{
"docid": "97dd8b1630b574797ca2847e6b3dfc0c",
"text": "We propose a novel discipline for programming stream functions and for the semantic description of stream manipulation languages based on the observation that both general and causal stream functions can be characterized as coKleisli arrows of comonads. This seems to be a promising application for the old, but very little exploited idea that if monads abstract notions of computation of a value, comonads ought to be useable as an abstraction of notions of value in a context. We also show that causal partial-stream functions can be described in terms of a combination of a comonad and a monad.",
"title": ""
},
{
"docid": "70374d2cbf730fab13c3e126359b59e8",
"text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.",
"title": ""
},
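A worked form of the measure described above may help; the following sketch is consistent with the abstract's description (the two directed Kullback-Leibler distances combined the way parallel resistors combine) rather than a quotation of the paper's notation:

```latex
% Sketch: resistor-average distance built from the two directed KL distances.
% D(p||q) denotes the Kullback-Leibler distance; R(p,q) is the symmetric combination.
\[
  \frac{1}{R(p,q)} \;=\; \frac{1}{D(p\,\|\,q)} \;+\; \frac{1}{D(q\,\|\,p)}
  \qquad\Longleftrightarrow\qquad
  R(p,q) \;=\; \Bigl( D(p\,\|\,q)^{-1} + D(q\,\|\,p)^{-1} \Bigr)^{-1}.
\]
% This is the same rule that gives the resistance of two resistors in parallel,
% which makes R symmetric in p and q even though each directed distance is not.
```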
{
"docid": "5742501e3d4a705cd14cfbe6872d3ad0",
"text": "Many devices and solutions for remote electrocardiogram (ECG) monitoring have been proposed in the literature. These solutions typically have a large marginal cost per added sensor and are not seamlessly integrated with other smart home solutions. Here, we propose an ECG remote monitoring system that is dedicated to non-technical users in need of long-term health monitoring in residential environments and is integrated in a broader Internet-of-Things (IoT) infrastructure. Our prototype consists of a complete vertical solution with a series of advantages with respect to the state of the art, considering both the prototypes with integrated front end and prototypes realized with off-the-shelf components: 1) ECG prototype sensors with record-low energy per effective number of quantized levels; 2) an architecture providing low marginal cost per added sensor/user; and 3) the possibility of seamless integration with other smart home systems through a single IoT infrastructure.",
"title": ""
},
{
"docid": "6ae2d8b60e9182300a2392a91e8ca876",
"text": "The need for text summarization is crucial as we enter the era of information overload. In this paper we present an automatic summarization system, which generates a summary for a given input document. Our system is based on identification and extraction of important sentences in the input document. We listed a set of features that we collect as part of summary generation process. These features were stored using vector representation model. We defined a ranking function which ranks each sentence as a linear combination of the sentence features. We also discussed about discourse coherence in summaries and techniques to achieve coherent and readable summaries. The experiments showed that the summary generated is coherent the selected features are really helpful in extracting the important information in the document.",
"title": ""
},
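As a rough illustration of the ranking step described above (a linear combination of sentence features), here is a minimal, hypothetical Python sketch; the feature names, weights, and selection size are invented, not the paper's:

```python
# Minimal sketch: score sentences as a weighted sum of features, keep the top-k,
# and emit them in original order to help discourse coherence.

def score_sentence(features, weights):
    """Rank a sentence as a linear combination of its feature values."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def summarize(sentences, extract_features, weights, k=3):
    scored = [(score_sentence(extract_features(s), weights), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:k]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

# Toy usage with a single placeholder feature (sentence length).
weights = {"length": 0.01}
extract = lambda s: {"length": float(len(s))}
print(summarize(["First sentence.", "Second one.", "Third, longer sentence here."],
                extract, weights, k=2))
```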
{
"docid": "143811aed9c2c99fb96aa938c7d3c277",
"text": "A novel wideband cavity-backed slot antenna operating at X-band is proposed. Compared to conventional cavity-backed slot antennas, both the bandwidth and gain of the proposed antenna are significantly increased by adding a multilayer dielectric cover. This antenna offers a 40% (|S11| <; -10 dB) impedance bandwidth. Meanwhile, the gain of a single antenna within this band is 6-9.5 dBi, 2-3 dB higher than those without that multilayer dielectric cover at most of the bandwidth. To further validate the capability of the proposed antenna, a 4 ×4 antenna array is designed, fabricated, and tested. The measurement results corroborate with simulations well. Taking advantage of its wider bandwidth and higher gain compared to the counterparts, this antenna could find great potential application in the future.",
"title": ""
},
{
"docid": "7f711c94920e0bfa8917ad1b5875813c",
"text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "5fbb54e63158066198cdf59e1a8e9194",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
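A hedged sketch of the power-balancing idea above: each node's transmit power is nudged so that its received signal strength at the gateway approaches a common target. The power limits, target RSSI, and update rule below are illustrative assumptions, not the paper's exact algorithm:

```python
# Sketch: per-node transmit-power update aiming at a common target RSSI at the gateway.
MIN_TX_DBM, MAX_TX_DBM = 2, 14   # assumed LoRa output-power range (EU868-style)

def update_tx_power(current_tx_dbm, measured_rssi_dbm, target_rssi_dbm=-110):
    """Raise or lower transmit power by the gap between target and measured RSSI."""
    correction = target_rssi_dbm - measured_rssi_dbm
    return max(MIN_TX_DBM, min(MAX_TX_DBM, current_tx_dbm + correction))

# Example: a node heard 8 dB too strong gets its power reduced by 8 dB (within limits).
print(update_tx_power(14, measured_rssi_dbm=-102))   # -> 6
```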
{
"docid": "3431f92cd0849f4782858834feebec03",
"text": "DeepFashion is a widely used clothing dataset with 50 categories and more than overall 200k images where each image is annotated with fine-grained attributes. This dataset is often used for clothes recognition and although it provides comprehensive annotations, the attributes distribution is unbalanced and repetitive specially for training fine-grained attribute recognition models. In this work, we tailored DeepFashion for fine-grained attribute recognition task by focusing on each category separately. After selecting categories with sufficient number of images for training, we remove very scarce attributes and merge the duplicate ones in each category, then we clean the dataset based on the new list of attributes. We use a bilinear convolutional neural network with pairwise ranking loss function for multi-label fine-grained attribute recognition and show that the new annotations improve the results for such a task. The detailed annotations for each of the selected categories are provided for public use.",
"title": ""
},
{
"docid": "b22137cbb14396f1dcd24b2a15b02508",
"text": "This paper studies the self-alignment properties between two chips that are stacked on top of each other with copper pillars micro-bumps. The chips feature alignment marks used for measuring the resulting offset after assembly. The accuracy of the alignment is found to be better than 0.5 µm in × and y directions, depending on the process. The chips also feature waveguides and vertical grating couplers (VGC) fabricated in the front-end-of-line (FEOL) and organized in order to realize an optical interconnection between the chips. The coupling of light between the chips is measured and compared to numerical simulation. This high accuracy self-alignment was obtained after studying the impact of flux and fluxless treatments on the wetting of the pads and the successful assembly yield. The composition of the bump surface was analyzed with Time-of-Flight Secondary Ions Mass Spectroscopy (ToF-SIMS) in order to understand the impact of each treatment. This study confirms that copper pillars micro-bumps can be used to self-align photonic integrated circuits (PIC) with another die (for example a microlens array) in order to achieve high throughput alignment of optical fiber to the PIC.",
"title": ""
}
] |
scidocsrr
|
753cdccecf1a83a60dd595b9095c08f2
|
Neural Network based Extreme Classification and Similarity Models for Product Matching
|
[
{
"docid": "755f7e93dbe43a0ed12eb90b1d320cb2",
"text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).",
"title": ""
}
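To illustrate the training signal of the Siamese setup above, here is a minimal numpy sketch of a contrastive-style loss on a pair of fixed-dimensional string embeddings. The encoder itself (stacked character-level bidirectional LSTMs in the paper) is deliberately omitted, and the margin value and loss form are assumptions rather than the paper's exact objective:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Pull embeddings of matching strings together; push non-matching pairs beyond a margin."""
    d = np.linalg.norm(emb_a - emb_b)
    return d ** 2 if similar else max(0.0, margin - d) ** 2

# Toy usage with random stand-in embeddings for two job titles.
rng = np.random.default_rng(0)
a, b = rng.normal(size=64), rng.normal(size=64)
print(contrastive_loss(a, b, similar=True))   # large penalty: the pair should be pulled together
```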
] |
[
{
"docid": "396f6b6c09e88ca8e9e47022f1ae195b",
"text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.",
"title": ""
},
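The abstract above describes achieving differential privacy by adding carefully designed noise to gradients. The sketch below shows a generic DP-SGD-style step (gradient norm clipping followed by Gaussian noise); the clip bound and noise scale are illustrative, and the clipping step is an assumption borrowed from standard differentially private training rather than necessarily the paper's exact mechanism:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Bound the gradient's L2 norm, then add Gaussian noise scaled to that bound."""
    rng = rng or np.random.default_rng()
    g = np.asarray(grad, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # clip
    return g + rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape)   # add noise
```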
{
"docid": "9d632b6a40551697a250b2017c29981c",
"text": "In this paper, a novel framework for dense pixel matching based on dynamic programming is introduced. Unlike most techniques proposed in the literature, our approach assumes neither known camera geometry nor the availability of rectified images. Under such conditions, the matching task cannot be reduced to finding correspondences between a pair of scanlines. We propose to extend existing dynamic programming methodologies to a larger dimensional space by using a 3D scoring matrix so that correspondences between a line and a whole image can be calculated. After assessing our framework on a standard evaluation dataset of rectified stereo images, experiments are conducted on unrectified and non-linearly distorted images. Results validate our new approach and reveal the versatility of our algorithm.",
"title": ""
},
{
"docid": "107d6605a6159d5a278b49b8c020cdd9",
"text": "Internet applications increasingly rely on scalable data structures that must support high throughput and store huge amounts of data. These data structures can be hard to implement efficiently. Recent proposals have overcome this problem by giving up on generality and implementing specialized interfaces and functionality (e.g., Dynamo [4]). We present the design of a more general and flexible solution: a fault-tolerant and scalable distributed B-tree. In addition to the usual B-tree operations, our B-tree provides some important practical features: transactions for atomically executing several operations in one or more B-trees, online migration of B-tree nodes between servers for load-balancing, and dynamic addition and removal of servers for supporting incremental growth of the system. Our design is conceptually simple. Rather than using complex concurrency and locking protocols, we use distributed transactions to make changes to B-tree nodes. We show how to extend the B-tree and keep additional information so that these transactions execute quickly and efficiently. Our design relies on an underlying distributed data sharing service, Sinfonia [1], which provides fault tolerance and a light-weight distributed atomic primitive. We use this primitive to commit our transactions. We implemented our B-tree and show that it performs comparably to an existing open-source B-tree and that it scales to hundreds of machines. We believe that our approach is general and can be used to implement other distributed data structures easily.",
"title": ""
},
{
"docid": "ce2139f51970bfa5bd3738392f55ea48",
"text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of a H-plane T junction of rectangular waveguide, one circular waveguide as an Eplane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physicalport and four-mode polarizer were researched by solving ScatteringMatrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ3, low complexity and manufacturing cost. A couple of the polarizer has been manufactured and tested, and the experimental results are basically consistent with the theories.",
"title": ""
},
{
"docid": "36a615660b8f0c60bef06b5a57887bd1",
"text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.",
"title": ""
},
{
"docid": "5b43cce2027f1e5afbf7985ca2d4af1a",
"text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, select a list of relevant candidate ads according to multimodal relevance. To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.",
"title": ""
},
{
"docid": "41a15d3dcca1ff835b5d983a8bb5343f",
"text": "and is made available as an electronic reprint (preprint) with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. ABSTRACT We describe the architecture and design of a through-the-wall radar. The radar is applied for the detection and localization of people hidden behind obstacles. It implements a new adaptive processing technique for people detection, which is introduced in this article. This processing technique is based on exponential averaging with adopted weighting coefficients. Through-the-wall detection and localization of a moving person is demonstrated by a measurement example. The localization relies on the time-of-flight approach.",
"title": ""
},
{
"docid": "1e90c85e21c0248e70fae594b152aa8e",
"text": "We recently demonstrated a high function wrist watch computer prototype that runs the Linux operating system and also X11 graphics libraries. In this paper we describe the unique energy related challenges and tradeoffs we encountered while building this watch. We show that the usage duty factor for the device heavily dictates which of the powers, active power or sleep power, needs to be minimized more aggressively in order to achieve the longest perceived battery life. We also describe the energy issues that percolate through several layers of software all the way from device usage scenarios, applications, user interfaces, system level software to device drivers and the need to systematically address all of them to achieve the battery life dictated by the hardware components and the capacity of the battery in the device.",
"title": ""
},
{
"docid": "17bc705ba1e4ee9f5620187582be60cc",
"text": "A new approach to the synthesis of longitudinal autopilots for missiles flying at high angle of attack regimes is presented. The methodology is based on sliding mode control, and uses a combination of aerodynamic surfaces and reaction jet thrusters, to achieve controllability beyond stall. The autopilot is tested on a small section of the flight envelope consisting of a fast 180 heading reversal in the vertical plane, which requires robustness with respect to uncertainties in the system’s dynamics induced by large variations in dynamic pressure and aerodynamic coefficients. Nonlinear simulation results show excellent performance and capabilities of the control system structure.",
"title": ""
},
{
"docid": "27be379b6192aa6db9101b7ec18d5585",
"text": "In this paper, we investigate the problem of detecting depression from recordings of subjects' speech using speech processing and machine learning. There has been considerable interest in this problem in recent years due to the potential for developing objective assessments from real-world behaviors, which may provide valuable supplementary clinical information or may be useful in screening. The cues for depression may be present in “what is said” (content) and “how it is said” (prosody). Given the limited amounts of text data, even in this relatively large study, it is difficult to employ standard method of learning models from n-gram features. Instead, we learn models using word representations in an alternative feature space of valence and arousal. This is akin to embedding words into a real vector space albeit with manual ratings instead of those learned with deep neural networks [1]. For extracting prosody, we employ standard feature extractors such as those implemented in openSMILE and compare them with features extracted from harmonic models that we have been developing in recent years. Our experiments show that our features from harmonic model improve the performance of detecting depression from spoken utterances than other alternatives. The context features provide additional improvements to achieve an accuracy of about 74%, sufficient to be useful in screening applications.",
"title": ""
},
{
"docid": "485cda7203863d2ff0b2070ca61b1126",
"text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.",
"title": ""
},
{
"docid": "773b5914dce6770b2db707ff4536c7f6",
"text": "This paper presents an automatic drowsy driver monitoring and accident prevention system that is based on monitoring the changes in the eye blink duration. Our proposed method detects visual changes in eye locations using the proposed horizontal symmetry feature of the eyes. Our new method detects eye blinks via a standard webcam in real-time at 110fps for a 320×240 resolution. Experimental results in the JZU [3] eye-blink database showed that the proposed system detects eye blinks with a 94% accuracy with a 1% false positive rate.",
"title": ""
},
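A small sketch of the duration-monitoring logic implied above: per-frame eye-state flags (closed/open) are turned into blink durations at the stated 110 fps, and an abnormally long closure is flagged. The 0.4 s threshold is a hypothetical value, not the paper's:

```python
FPS = 110                      # frame rate quoted in the abstract
DROWSY_SECONDS = 0.4           # assumed threshold for an abnormally long blink

def blink_durations(closed_flags, fps=FPS):
    """Convert a per-frame closed/open sequence into blink durations in seconds."""
    durations, run = [], 0
    for closed in closed_flags:
        if closed:
            run += 1
        elif run:
            durations.append(run / fps)
            run = 0
    if run:
        durations.append(run / fps)
    return durations

def looks_drowsy(closed_flags, threshold=DROWSY_SECONDS):
    return any(d >= threshold for d in blink_durations(closed_flags))
```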
{
"docid": "972be3022e7123be919d9491a6dafe1c",
"text": "An improved coaxial high-voltage vacuum insulator applied in a Tesla-type generator, model TPG700, has been designed and tested for high-power microwave (HPM) generation. The design improvements include: changing the connection type of the insulator to the conductors from insertion to tangential, making the insulator thickness uniform, and using Nylon as the insulation material. Transient field simulation shows that the electric field (E-field) distribution within the improved insulator is much more uniform and that the average E-field on the two insulator surfaces is decreased by approximately 30% compared with the previous insulator at a voltage of 700 kV. Key structures such as the anode and the cathode shielding rings of the insulator have been optimized to significantly reduce E-field stresses. Aging experiments and experiments for HPM generation with this insulator were conducted based on a relativistic backward-wave oscillator. The preliminary test results show that the output voltage is larger than 700 kV and the HPM power is about 1 GW. Measurements show that the insulator is well within allowable E-field stresses on both the vacuum insulator surface and the cathode shielding ring.",
"title": ""
},
{
"docid": "b47bbb2a59a26fb0d9c2987bc308bc9d",
"text": "Nasal reconstruction is always challenging for plastic surgeons. Its midfacial localisation and the relationship between convexities and concavities of nasal subunits make impossible to hide any sort of deformity without a proper reconstruction. Nasal tissue defects can be caused by tumor removal, trauma or by any other insult to the nasal pyramid, like cocaine abuse, developing an irreversible sequela. Due to the special characteristics of the nasal pyramid surface, the removal of the lesion or the debridement must be performed according to nasal subunits as introduced by Burget. Afterwards, the reconstructive technique or a combination of them must be selected according to the size and the localisation of the defect created, and tissue availability to fulfil the procedure. An anatomical reconstruction must be completed as far as possible, trying to restore the nasal lining, the osteocartilaginous framework and the skin cover. In our department, 35 patients were operated on between 2000 and 2002: three bilobed flaps, five nasolabial flaps, two V-Y advancement flaps from the sidewall, three dorsonasal flaps modified by Ohsumi, 19 paramedian forehead flaps, three cheek advancement flaps, three costocondral grafts, two full-thickness skin grafts and two auricular helix free flaps for alar reconstruction. All flaps but one free flap survived with no postoperative complications. After 12-24 months of follow-up, all reconstructions remained stable from cosmetic and functional point of view. Our aim is to present our choice for nasal reconstruction according to the size and localization of the defect, and donor tissue availability.",
"title": ""
},
{
"docid": "655e2fda8fd2e8f7a665ca64047399a0",
"text": "This article describes a self-propelled dolphin robot that aims to create a stable and controllable experimental platform. A viable bioinspired approach to generate diverse instances of dolphin-like swimming online via a center pattern generator (CPG) network is proposed.The characteristic parameters affecting three-dimensional (3-D) swimming performance are further identified and discussed. Both interactive and programmed swimming tests are provided to illustrate the validity of the present scheme.",
"title": ""
},
{
"docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
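The $[O(1/V), O(V)]$ power-delay tradeoff quoted above is characteristic of Lyapunov drift-plus-penalty control; the sketch below is that generic construction, offered as a plausible reading rather than the paper's actual derivation:

```latex
% Generic drift-plus-penalty step (assumed reading): at each slot, choose the radio and
% computational resource allocation to minimize
\[
  \Delta L\bigl(\Theta(t)\bigr) \;+\; V\,\mathbb{E}\bigl[\,p(t)\mid\Theta(t)\,\bigr],
\]
% where \Theta(t) collects the task-buffer backlogs, \Delta L is the Lyapunov drift,
% p(t) is the weighted sum power, and the control parameter V >= 0 trades an O(1/V)
% gap to the optimal power against O(V) average backlog, hence O(V) execution delay.
```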
{
"docid": "e16c419551e73e9787029460f7047b4d",
"text": "Cloud Computing with Virtualization offers attractive flexibility and elasticity to deliver resources by providing a platform for consolidating complex IT resources in a scalable manner. However, efficiently running HPC applications on Cloud Computing systems is still full of challenges. One of the biggest hurdles in building efficient HPC clouds is the unsatisfactory performance offered by underlying virtualized environments, more specifically, virtualized I/O devices. Recently, Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high-performance interconnects such as InfiniBand and 10GigE. Due to its near native performance for inter-node communication, many cloud systems such as Amazon EC2 have been using SR-IOV in their production environments. Nevertheless, recent studies have shown that the SR-IOV scheme lacks locality aware communication support, which leads to performance overheads for inter-VM communication within the same physical node. In this paper, we propose an efficient approach to build HPC clouds based on MVAPICH2 over Open Stack with SR-IOV. We first propose an extension for Open Stack Nova system to enable the IV Shmem channel in deployed virtual machines. We further present and discuss our high-performance design of virtual machine aware MVAPICH2 library over Open Stack-based HPC Clouds. Our design can fully take advantage of high-performance SR-IOV communication for inter-node communication as well as Inter-VM Shmem (IVShmem) for intra-node communication. A comprehensive performance evaluation with micro-benchmarks and HPC applications has been conducted on an experimental Open Stack-based HPC cloud and Amazon EC2. The evaluation results on the experimental HPC cloud show that our design and extension can deliver near bare-metal performance for implementing SR-IOV-based HPC clouds with virtualization. Further, compared with the performance on EC2, our experimental HPC cloud can exhibit up to 160X, 65X, 12X improvement potential in terms of point-to-point, collective and application for future HPC clouds.",
"title": ""
},
{
"docid": "2e66317dfe4005c069ceac2d4f9e3877",
"text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.",
"title": ""
},
{
"docid": "7401d33980f6630191aa7be7bf380ec3",
"text": "We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https://daniilidis-group.github.io/penncosyvio/.",
"title": ""
}
] |
scidocsrr
|
f0d525ad1953c4a684f4547a90deed95
|
Automating Generation of Low Precision Deep Learning Operators
|
[
{
"docid": "0e5187e6d72082618bd5bda699adab93",
"text": "Many applications of mobile deep learning, especially real-time computer vision workloads, are constrained by computation power. This is particularly true for workloads running on older consumer phones, where a typical device might be powered by a singleor dual-core ARMv7 CPU. We provide an open-source implementation and a comprehensive analysis of (to our knowledge) the state of the art ultra-low-precision (<4 bit precision) implementation of the core primitives required for modern deep learning workloads on ARMv7 devices, and demonstrate speedups of 4x-20x over our additional state-of-the-art float32 and int8 baselines.",
"title": ""
},
{
"docid": "cebdedb344f2ba7efb95c2933470e738",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
}
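The forward-pass quantizers behind the scheme above can be written compactly; the numpy sketch below follows the commonly cited DoReFa formulation (a k-bit uniform quantizer plus the tanh-based weight mapping) and omits the straight-through gradient estimator used during training:

```python
import numpy as np

def quantize_k(x, k):
    """Quantize x in [0, 1] onto a uniform grid with 2**k - 1 steps."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def quantize_weights(w, k):
    """Map weights through tanh, rescale to [0, 1], quantize, and map back to [-1, 1]."""
    t = np.tanh(w)
    x = t / (2.0 * np.max(np.abs(t))) + 0.5
    return 2.0 * quantize_k(x, k) - 1.0

# Example: 1-bit weights collapse to the sign-like extremes {-1, +1}.
print(quantize_weights(np.array([-0.8, -0.1, 0.05, 0.9]), k=1))
```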
] |
[
{
"docid": "07be887852c3a4220aacd5b59c421ba2",
"text": "Wearable human interaction devices are technologies with various applications for improving human comfort, convenience and security and for monitoring health conditions. Healthcare monitoring includes caring for the welfare of every person, which includes early diagnosis of diseases, real-time monitoring of the effects of treatment, therapy, and the general monitoring of the conditions of people’s health. As a result, wearable electronic devices are receiving greater attention because of their facile interaction with the human body, such as monitoring heart rate, wrist pulse, motion, blood pressure, intraocular pressure, and other health-related conditions. In this paper, various smart sensors and wireless systems are reviewed, the current state of research related to such systems is reported, and their detection mechanisms are compared. Our focus was limited to wearable and attachable sensors. Section 1 presents the various smart sensors. In Section 2, we describe multiplexed sensors that can monitor several physiological signals simultaneously. Section 3 provides a discussion about short-range wireless systems including bluetooth, near field communication (NFC), and resonance antenna systems for wearable electronic devices.",
"title": ""
},
{
"docid": "9d28ff85097a307ed6089b07dae17fca",
"text": "Over the past 15 years there has been increasing recognition that careful attention to the design of a system’s software architecture is critical to satisfying its requirements for quality attributes such as performance, security, and dependability. As a consequence, during this period the field of software architecture has matured significantly. However, current practices of software architecture rely on relatively informal methods, limiting the potential for fully exploiting architectural designs to gain insight and improve the quality of the resulting system. In this paper we draw from a variety of research results to illustrate how formal approaches to software architecture can lead to enhancements in software quality, including improved clarity of design, support for analysis, and assurance that implementations conform to their intended architecture.",
"title": ""
},
{
"docid": "49e80a1e55137e85b0e4a26b24419aea",
"text": "Purpose – The proliferation and advance of web-based technologies create expanded opportunities for retailers to gain a better understanding of their customers. However, the success of these web-based discussion boards depends solely on whether customers are willing to share their knowledge and experience with other customers in these discussion boards. Thus, this study aims at identifying the factors that drive knowledge sharing among customers in web-based discussion boards. Design/methodology/approach – An exploratory study with 104 respondents was conducted to identify and categorize the key factors of customer knowledge sharing in web-based discussion boards. Findings – The results indicate that the enjoyment of helping others is the most frequently cited reason for customer knowledge sharing in web-based discussion boards. On the other hand, the lack of knowledge self-efficacy is the mostly cited reason explaining why customers do not want to share knowledge with others. Research limitations/implications – The exploratory analysis suggests that the underlying reasons that motivate and inhibit customers to share are very different. There is a need to integrate multiple theoretical perspectives from across the social and technical domains if this phenomenon is to be better understood. Practical implications – Building upon the findings of this study, some generic guidelines for retailers and web designers for promoting customer sharing in web-based discussion boards are outlined. Originality/value – This research is one of the first studies to use the socio-technical perspective to investigate customer knowledge sharing phenomena in web-based discussion boards.",
"title": ""
},
{
"docid": "094a524941b9ce2e9d9620264fdfe44e",
"text": "Large graphs are getting increasingly popular and even indispensable in many applications, for example, in social media data, large networks, and knowledge bases. Efficient graph analytics thus becomes an important subject of study. To increase efficiency and scalability, in-memory computation and parallelism have been explored extensively to speed up various graph analytical workloads. In many graph analytical engines (e.g., Pregel, Neo4j, GraphLab), parallelism is achieved via one of the three concurrency control models, namely, bulk synchronization processing (BSP), asynchronous processing, and synchronous processing. Among them, synchronous processing has the potential to achieve the best performance due to fine-grained parallelism, while ensuring the correctness and the convergence of the computation, if an effective concurrency control scheme is used. This paper explores the topological properties of the underlying graph to design and implement a highly effective concurrency control scheme for efficient synchronous processing in an in-memory graph analytical engine. Our design uses a novel hybrid approach that combines 2PL (two-phase locking) with OCC (optimistic concurrency control), for high degree and low degree vertices in a graph respectively. Our results show that the proposed hybrid synchronous scheduler has significantly outperformed other synchronous schedulers in existing graph analytical engines, as well as BSP and asynchronous schedulers.",
"title": ""
},
{
"docid": "85af80aae4bf319aa0af94b2e5f855d6",
"text": "In this paper an extensive literature review on load–frequency control (LFC) problem in power system has been highlighted. The various configuration of power system models and control techniques/ strategies that concerns to LFC issues have been addressed in conventional as well as distribution generation-based power systems. Further, investigations on LFC challenges incorporating storage devices BESS/SMES, FACTS devices, wind–diesel and PV systems etc have been discussed too. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "52b5fa0494733f2f6b72df0cdfad01f4",
"text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.",
"title": ""
},
{
"docid": "464b66e2e643096bd344bea8026f4780",
"text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "8c89f9342ffc992d5819cbe9774bf6bb",
"text": "Discrete Event Simulation (DES) has been widely used in modelling health-care systems for many years and a simple citation analysis shows that the number of papers published has increased markedly since 2004. Over the last 30 years several significant reviews of DES papers have been published and we build on these to focus on the most recent era, with an interest in performance modelling within hospitals. As there are few papers that propose or illustrate general approaches, we classify papers according to the areas of application evident in the literature, discussing the apparent lack of genericity. There is considerable diversity in the objectives of reported studies and in the consequent level of detail: We discuss why specificity dominates and why more generic approaches are rare. Journal of Simulation (2010) 4, 42–51. doi:10.1057/jos.2009.25",
"title": ""
},
{
"docid": "96b6e87aa29a37a0a06d18f1d32dfaca",
"text": "This paper addresses the problem of joint 3D object structure and camera pose estimation from a single RGB image. Existing approaches typically rely on both images with 2D keypoint annotations and 3D synthetic data to learn a deep network model due to difficulty in obtaining 3D annotations. However, the domain gap between the synthetic and image data usually leads to a 3D object interpretation model sensitive to the viewing angle, occlusion and background clutter in real images. In this work, we propose a semi-supervised learning strategy to build a robust 3D object interpreter, which exploits rich object videos for better generalization under large pose variations and noisy 2D keypoint estimation. The core design of our learning algorithm is a new loss function that enforces the temporal consistency constraint in the 3D predictions on videos. The experiment evaluation on the IKEA, PASCAL3D+ and our object video dataset shows that our approach achieves the state-of-the-art performance in structure and pose estimation.",
"title": ""
},
{
"docid": "559be3dd29ae8f6f9a9c99951c82a8d3",
"text": "This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. A special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.",
"title": ""
},
{
"docid": "3e5e7e38068da120639c3fcc80227bf8",
"text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.",
"title": ""
},
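For readers unfamiliar with how the reported molar absorptivities are used, the Beer-Lambert relation gives the conversion; the absorbance value in the worked line below is invented purely for illustration, with the 1 cm path length matching the spectrophotometer-equivalent calibration described above:

```latex
% Beer-Lambert law with the gallic-acid epsilon reported above and a 1 cm path length:
\[
  A = \varepsilon\, c\, \ell
  \quad\Rightarrow\quad
  c = \frac{A}{\varepsilon\,\ell}
    = \frac{0.50}{141{,}698\ \mathrm{M^{-1}\,cm^{-1}} \times 1\ \mathrm{cm}}
    \approx 3.5\ \mu\mathrm{M}\ \text{(gallic acid equivalents, hypothetical reading)}.
\]
```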
{
"docid": "2b19c55c2d69158361e27ce459c3112d",
"text": "In many domains, classes have highly regular internal structure. For example, so-called business objects often contain boilerplate code for mapping database fields to class members. The boilerplate code must be repeated per-field for every class, because existing mechanisms for constructing classes do not provide a way to capture and reuse such member-level structure. As a result, programmers often resort to ad hoc code generation. This paper presents a lightweight mechanism for specifying and reusing member-level structure in Java programs. The proposal is based on a modest extension to traits that we have termed trait-based metaprogramming. Although the semantics of the mechanism are straightforward, its type theory is difficult to reconcile with nominal subtyping. We achieve reconciliation by introducing a hybrid structural/nominal type system that extends Java’s type system. The paper includes a formal calculus defined by translation to Featherweight Generic Java.",
"title": ""
},
{
"docid": "cfb790c5b1cbaed183d38265d3ec02b2",
"text": "MOTIVATION\nIntra-tumor heterogeneity is one of the key confounding factors in deciphering tumor evolution. Malignant cells exhibit variations in their gene expression, copy numbers, and mutation even when originating from a single progenitor cell. Single cell sequencing of tumor cells has recently emerged as a viable option for unmasking the underlying tumor heterogeneity. However, extracting features from single cell genomic data in order to infer their evolutionary trajectory remains computationally challenging due to the extremely noisy and sparse nature of the data.\n\n\nRESULTS\nHere we describe 'Dhaka', a variational autoencoder method which transforms single cell genomic data to a reduced dimension feature space that is more efficient in differentiating between (hidden) tumor subpopulations. Our method is general and can be applied to several different types of genomic data including copy number variation from scDNA-Seq and gene expression from scRNA-Seq experiments. We tested the method on synthetic and 6 single cell cancer datasets where the number of cells ranges from 250 to 6000 for each sample. Analysis of the resulting feature space revealed subpopulations of cells and their marker genes. The features are also able to infer the lineage and/or differentiation trajectory between cells greatly improving upon prior methods suggested for feature extraction and dimensionality reduction of such data.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll the datasets used in the paper are publicly available and developed software package and supporting info is available on Github https://github.com/MicrosoftGenomics/Dhaka.",
"title": ""
},
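The dimensionality-reduction step above rests on standard variational-autoencoder machinery; the numpy sketch below shows only the Gaussian reparameterization and KL term, with the encoder/decoder networks omitted. It is a generic VAE fragment, not the Dhaka implementation:

```python
import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Draw z ~ N(mu, exp(log_var)) with the reparameterization trick."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ), the regularizer in the VAE objective."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
```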
{
"docid": "10f5ad322eeee68e57b66dd9f2bfe25b",
"text": "Irmin is an OCaml library to design purely functional data structures that can be persisted on disk and be merged and synchronized efficiently. In this paper, we focus on the merge aspect of the library and present two data structures built on top of Irmin: (i) queues and (ii) ropes that extend the corresponding purely functional data structures with a 3-way merge operation. We provide early theoretical and practical complexity results for these new data structures. Irmin is available as open-source code as part of the MirageOS project.",
"title": ""
},
{
"docid": "acfe73c1e02fe1bd1b6cbee0674eefd6",
"text": "EDWIN SEVER BECHIR1, MARIANA PACURAR1, TUDOR ALEXANDRU HANTOIU1, ANAMARIA BECHIR2*, OANA SMATREA2, ALEXANDRU BURCEA2, CHERANA GIOGA2, MONICA MONEA1 1 Medicine and Pharmacy University of Tirgu-Mures, Faculty of Dentistry, 38 Gheorghe Marinescu Str., 540142,Tirgu-Mures, Romania 2 Titu Maiorescu University of Bucharest, Faculty of Dentistry, Department of Dental Specialties, 67A Gheorghe Petrascu Str., 031593, Bucharest, Romania",
"title": ""
},
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media",
"title": ""
},
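To make the network-effect idea from the preceding entry more concrete, here is a minimal, hypothetical sketch of propagating trust scores over a signed reviewer-product graph. This is not the FRAUDEAGLE algorithm itself; the update rules, the toy review list, and all variable names are assumptions chosen only to illustrate how reviewer and product scores can reinforce each other.

```python
# Illustrative network-effect scoring on a signed reviewer-product graph.
# NOT the FRAUDEAGLE algorithm; a hypothetical sketch of the general idea.
import numpy as np

# Each review: (reviewer_id, product_id, sign), sign=+1 positive, -1 negative.
reviews = [(0, 0, +1), (0, 1, +1), (1, 0, +1), (2, 0, -1), (2, 1, -1)]
n_reviewers, n_products = 3, 2

trust = np.ones(n_reviewers)     # reviewer honesty scores in [-1, 1]
quality = np.zeros(n_products)   # product quality scores in [-1, 1]

for _ in range(20):
    new_quality = np.zeros(n_products)
    counts = np.zeros(n_products)
    for r, p, s in reviews:
        new_quality[p] += trust[r] * s   # trusted reviewers move quality more
        counts[p] += 1
    quality = np.tanh(new_quality / np.maximum(counts, 1))

    new_trust = np.zeros(n_reviewers)
    counts = np.zeros(n_reviewers)
    for r, p, s in reviews:
        new_trust[r] += s * quality[p]   # agreeing with consensus raises trust
        counts[r] += 1
    trust = np.tanh(new_trust / np.maximum(counts, 1))

print("reviewer trust:", np.round(trust, 2))
print("product quality:", np.round(quality, 2))
```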
{
"docid": "84f5ab1dfcf6e03241fd72d3e76179f5",
"text": "The goal of this work is to develop a meeting transcription system that can recognize speech even when utterances of different speakers are overlapped. While speech overlaps have been regarded as a major obstacle in accurately transcribing meetings, a traditional beamformer with a single output has been exclusively used because previously proposed speech separation techniques have critical constraints for application to real meetings. This paper proposes a new signal processing module, called an unmixing transducer, and describes its implementation using a windowed BLSTM. The unmixing transducer has a fixed number, say J, of output channels, where J may be different from the number of meeting attendees, and transforms an input multi-channel acoustic signal into J time-synchronous audio streams. Each utterance in the meeting is separated and emitted from one of the output channels. Then, each output signal can be simply fed to a speech recognition back-end for segmentation and transcription. Our meeting transcription system using the unmixing transducer outperforms a system based on a stateof-the-art neural mask-based beamformer by 10.8%. Significant improvements are observed in overlapped segments. To the best of our knowledge, this is the first report that applies overlapped speech recognition to unconstrained real meeting audio.",
"title": ""
},
{
"docid": "f94ad2b4cbf3bb6fc4ddb58709d9b46e",
"text": "Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science and the humanities. In this thesis, a new efficient parallel Markov Chain Monte Carlo inference algorithm is proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and will converge to the true posterior distribution. In addition, in this thesis a supervised topic model for high-dimensional text classification is also proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.",
"title": ""
},
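The preceding entry concerns parallel MCMC inference for topic models. As a point of reference, the sketch below shows a compact, single-threaded collapsed Gibbs sampler for LDA, which is the kind of sampler such work parallelises; it is not the thesis's proposed parallel algorithm, and the toy corpus, vocabulary size, and hyperparameters are invented for illustration.

```python
# Minimal collapsed Gibbs sampler for LDA (serial, toy-sized); shown only to
# illustrate the class of MCMC inference that parallel topic-model work speeds up.
import numpy as np

docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 1, 4, 4]]   # word ids per document
V, K, alpha, beta = 5, 2, 0.1, 0.01                  # vocab size, topics, priors
rng = np.random.default_rng(0)

z = [[rng.integers(K) for _ in doc] for doc in docs]          # topic assignments
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):                                          # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                                       # remove current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())                  # resample the topic
            z[d][i] = k; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print(np.round((nkw + beta) / (nk[:, None] + V * beta), 2))   # topic-word estimates
```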
{
"docid": "fb3e2f6c4f790b1f5c30a2d95c8d3eb4",
"text": "Top-N recommender systems typically utilize side information to address the problem of data sparsity. As nowadays side information is growing towards high dimensionality, the performances of existing methods deteriorate in terms of both effectiveness and efficiency, which imposes a severe technical challenge. In order to take advantage of high-dimensional side information, we propose in this paper an embedded feature selection method to facilitate top-N recommendation. In particular, we propose to learn feature weights of side information, where zero-valued features are naturally filtered out. We also introduce non-negativity and sparsity to the feature weights, to facilitate feature selection and encourage low-rank structure. Two optimization problems are accordingly put forward, respectively, where the feature selection is tightly or loosely coupled with the learning procedure. Augmented Lagrange Multiplier and Alternating Direction Method are applied to efficiently solve the problems. Experiment results demonstrate the superior recommendation quality of the proposed algorithm to that of the state-of-the-art alternatives.",
"title": ""
}
] |
scidocsrr
|
417057687f9567b6a10566defb96e299
|
Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration
|
[
{
"docid": "d0c75242aad1230e168122930b078671",
"text": "Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.",
"title": ""
}
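The entry above describes object segmentation via s/t graph cuts, where unary terms link pixels to the source and sink and pairwise terms encourage smooth boundaries. The sketch below is a tiny illustration of that construction on a 1-D toy "image"; the intensities and the smoothness weight LAMBDA are assumed values, and networkx is used for clarity even though real systems rely on specialised max-flow solvers.

```python
# Toy s/t graph-cut segmentation: unary terms to source/sink, pairwise terms
# between neighbours, minimum cut gives the binary labelling.
import networkx as nx

pixels = [0.9, 0.8, 0.7, 0.2, 0.1]   # toy 1-D "image" intensities
LAMBDA = 0.5                          # smoothness weight (assumed value)

G = nx.DiGraph()
for i, v in enumerate(pixels):
    # unary terms: cost of assigning pixel i to background / foreground
    G.add_edge("src", i, capacity=v)          # bright pixels prefer foreground
    G.add_edge(i, "sink", capacity=1.0 - v)   # dark pixels prefer background
for i in range(len(pixels) - 1):
    # pairwise smoothness terms between neighbouring pixels (both directions)
    G.add_edge(i, i + 1, capacity=LAMBDA)
    G.add_edge(i + 1, i, capacity=LAMBDA)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "src", "sink")
labels = [1 if i in source_side else 0 for i in range(len(pixels))]
print("cut cost:", round(cut_value, 2), "labels:", labels)   # e.g. [1, 1, 1, 0, 0]
```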
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "1ffadb09d21cedca89d27450c38b776b",
"text": "OBJECTIVES\nTo investigate speech outcomes in 5- and 10-year-old children with unilateral cleft lip and palate (UCLP) treated according to minimal incision technique (MIT) - a one-stage palatal method.\n\n\nMETHODS\nA retrospective, longitudinal cohort study of a consecutive series of 69 patients born with UCLP, treated with MIT (mean age 13 months) was included. Forty-two children (43%) received a velopharyngeal flap; 12 before 5 years and another 18 before 10 years of age. Cleft speech variables were rated from standardized audio recordings at 5 and 10 years of age, independently by three experienced, external speech-language pathologists, blinded to the material. The prevalences of cleft speech characteristics were determined, and inter- and intra-rater agreement calculated.\n\n\nRESULTS\nMore than mild hypernasality, weak pressure consonants and perceived incompetent velopharyngeal function were present in 19-22% of the children at 5 years, but improved to less than 5% at 10 years. However, audible nasal air leakage, prevalent in 23% at 5 years, did not improve by age 10. Thirty percent had frequent or almost always persistent compensatory articulation at 5 years, and 6% at age 10. The general impression of speech improved markedly, from 57% giving a normal impression at 5 years to 89% at 10 years. A high prevalence of distorted/s/was found at both 5 and 10 years of age.\n\n\nCONCLUSIONS\nA high occurrence of speech deviances at 5 years of age after MIT was markedly reduced at 10 years in this study of children with unilateral cleft lip and palate. The high pharyngeal flap rate presumably accounted for the positive speech development.",
"title": ""
},
{
"docid": "609fa8716f97a1d30683997d778e4279",
"text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.",
"title": ""
},
{
"docid": "6be73a6559c7f1b99cec51125169fd5b",
"text": "We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"title": ""
},
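The local reparameterization trick described in the entry above can be shown in a few lines: instead of sampling one weight matrix per minibatch, the pre-activations are sampled directly, so the noise is independent across datapoints. The numpy sketch below is only a forward-pass illustration with toy shapes and values; it is not the paper's training code.

```python
# Local reparameterization sketch: sample activations b ~ N(x @ mu, x^2 @ sigma^2)
# rather than sampling the weight matrix W ~ N(mu, sigma^2) once per minibatch.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 4, 3, 2
x = rng.normal(size=(batch, d_in))

mu = rng.normal(scale=0.1, size=(d_in, d_out))        # posterior means of weights
log_sigma2 = np.full((d_in, d_out), -4.0)             # posterior log-variances

# Global reparameterization: one weight sample shared by the whole minibatch.
w = mu + np.exp(0.5 * log_sigma2) * rng.normal(size=mu.shape)
act_global = x @ w

# Local reparameterization: independent activation noise per datapoint,
# which gives lower-variance gradient estimates.
act_mean = x @ mu
act_var = (x ** 2) @ np.exp(log_sigma2)
act_local = act_mean + np.sqrt(act_var) * rng.normal(size=act_mean.shape)

print(act_global.shape, act_local.shape)   # both (4, 2)
```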
{
"docid": "e19b874a9c1942ba0dc7872ed3257a4d",
"text": "This contribution shows the successful implementation of a fully-differential single-chip radar transceiver with switchable transmit path. The chip facilitates cascading of multiple modules via daisy-chaining using the integrated LO power splitter and LO output on the chip edge opposite to the LO input (LO feedthrough). An on-chip differential rat-race coupler provides for separation of transmit (TX) and receive (RX) paths, therefore only a single antenna port is required. The circuit performance is demonstrated in on-board measurements.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
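The novelty score that drives the search described in the preceding entry is typically the mean distance of an individual's behaviour descriptor to its k nearest neighbours in the population and archive. The sketch below implements that standard scoring; the behaviour descriptors, k, and the random data are assumptions for illustration, not the paper's experimental setup.

```python
# Novelty score: mean distance to the k nearest neighbours in behaviour space,
# computed over the current population plus the novelty archive.
import numpy as np

def novelty_scores(behaviors, archive, k=3):
    """behaviors: (n, d) descriptors of the current population;
    archive: (m, d) previously archived descriptors."""
    reference = np.vstack([behaviors, archive]) if len(archive) else behaviors
    scores = []
    for b in behaviors:
        d = np.linalg.norm(reference - b, axis=1)
        d = np.sort(d)[1:k + 1]          # skip the zero distance to itself
        scores.append(d.mean())
    return np.array(scores)

rng = np.random.default_rng(0)
pop = rng.uniform(size=(10, 2))          # e.g. final robot positions in the arena
archive = rng.uniform(size=(5, 2))
print(np.round(novelty_scores(pop, archive), 3))
```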
{
"docid": "f540b04584b7ab04504d00ab0ff4bd95",
"text": "As the performance of the filtering system depends upon the accuracy of the noise detection scheme, in this paper, we present a new scheme for impulse noise detection based on two levels of decision. In this scheme in the first stage we coarsely identify the corrupted pixels and in the second stage we finally decide whether the pixel under consideration is really corrupt or not. The efficacy of the proposed filter has been confirmed by extensive simulations. Keywords—Impulse detection, noise removal, image filtering.",
"title": ""
},
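The preceding entry describes impulse-noise removal with a two-level detection scheme (coarse identification, then a final decision) before filtering. The sketch below is a loose illustration of that two-stage pattern using local medians; the thresholds, window sizes, and synthetic image are assumed values and do not reproduce the paper's detector.

```python
# Illustrative two-stage impulse detector: coarse detection, confirmation on a
# larger window, then median replacement of confirmed noisy pixels only.
import numpy as np
from scipy.ndimage import median_filter

def detect_and_filter(img, t1=60, t2=30):
    med = median_filter(img.astype(float), size=3)
    suspects = np.abs(img - med) > t1          # stage 1: coarse detection
    med5 = median_filter(img.astype(float), size=5)
    corrupted = suspects & (np.abs(img - med5) > t2)   # stage 2: final decision
    out = img.astype(float).copy()
    out[corrupted] = med[corrupted]            # replace only confirmed impulses
    return out, corrupted

rng = np.random.default_rng(0)
clean = np.full((8, 8), 128.0)
noisy = clean.copy()
noisy[rng.uniform(size=clean.shape) < 0.1] = 255.0   # salt noise
filtered, mask = detect_and_filter(noisy)
print("detected impulses:", int(mask.sum()))
```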
{
"docid": "21e4978e1fcb69246c86a58dc9d23c2a",
"text": "Health and wellness have drawn significant attention in the HCI and CSCW communities. Many prior studies have focused on designing technologies that are patient-centric, allowing caregivers to take better care of patients. Less has been done in understanding and minimizing the burden of caregiving in caregivers' own lives. We conducted a qualitative interview study to understand their experiences in caregiving. The findings reveal a great magnitude of challenges in the caregivers' day-to-day lives, ranging from the physical and social, to the personal and emotional. Caregivers have to constantly balance their personal lives with work, family, and their caregiver roles, which can be overwhelmingly stressful. We discuss how caregivers attempt maintaining this balance through two concepts: first, giving-impact, and second, visibility-invisibility. Our study's findings call for system design that focuses not only on patients but also caregivers, addressing the burdens that often impair their health and wellness.",
"title": ""
},
{
"docid": "29ce9730d55b55b84e195983a8506e5c",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "e13874aa8c3fe19bb2a176fd3a039887",
"text": "As a typical deep learning model, Convolutional Neural Network (CNN) has shown excellent ability in solving complex classification problems. To apply CNN models in mobile ends and wearable devices, a fully pipelined hardware architecture adopting a Row Processing Tree (RPT) structure with small memory resource consumption between convolutional layers is proposed. A modified Row Stationary (RS) dataflow is implemented to evaluate the RPT architecture. Under the the same work frequency requirement for these two architectures, the experimental results show that the RPT architecture reduces 91% on-chip memory and 75% DRAM bandwidth compared with the modified RS dataflow, but the throughput of the modified RS dataflow is 3 times higher than the our proposed RPT architecture. The RPT architecture can achieve 121fps at 100MHZ while processing a CNN including 4 convolutional layers.",
"title": ""
},
{
"docid": "5b36ec4a7282397402d582de7254d0c1",
"text": "Recurrent neural network language models (RNNLMs) have becoming increasingly popular in many applications such as automatic speech recognition (ASR). Significant performance improvements in both perplexity and word error rate over standard n-gram LMs have been widely reported on ASR tasks. In contrast, published research on using RNNLMs for keyword search systems has been relatively limited. In this paper the application of RNNLMs for the IARPA Babel keyword search task is investigated. In order to supplement the limited acoustic transcription data, large amounts of web texts are also used in large vocabulary design and LM training. Various training criteria were then explored to improved RNNLMs' efficiency in both training and evaluation. Significant and consistent improvements on both keyword search and ASR tasks were obtained across all languages.",
"title": ""
},
{
"docid": "e36eeb99b8d816d77b825daab4839b41",
"text": "3T MRI has become increasingly available for better imaging of interosseous ligaments, TFCC, and avascular necrosis compared with 1.5T MRI. This study assesses the sensitivity and specificity of 3T MRI compared with arthroscopy as the gold standard. Eighteen patients were examined with 3T MRI using coronal T1-TSE; PD-FS; and coronal, sagittal, and axial contrast-enhanced T1-FFE-FS sequences. Two musculoskeletal radiologists evaluated the images independently. Patients underwent diagnostic arthroscopy. The classifications of the cartilage lesions showed good correlations with the arthroscopy findings (κ = 0.8–0.9). In contrast to the arthroscopy, cartilage of the distal carpal row was very good and could be evaluated in all patients on MRI. The sensitivity for the TFCC lesion was 83%, and the specificity was 42% (radiologist 1) and 63% (radiologist 2). For the ligament lesions, the sensitivity and specificity were 75 and 100%, respectively, with a high interobserver agreement (κ = 0.8–0.9). 3T MRI proved to be of good value in diagnosing cartilage lesions, especially in the distal carpal row, whereas wrist arthroscopy provided therapeutic options. When evaluating the surgical therapeutical options, 3T MRI is a good diagnostic tool for pre-operatively evaluating the cartilage of the distal carpal row.",
"title": ""
},
{
"docid": "c399b42e2c7307a5b3c081e34535033d",
"text": "The Internet of Things (IoT) plays an ever-increasing role in enabling smart city applications. An ontology-based semantic approach can help improve interoperability between a variety of IoT-generated as well as complementary data needed to drive these applications. While multiple ontology catalogs exist, using them for IoT and smart city applications require significant amount of work. In this paper, we demonstrate how can ontology catalogs be more effectively used to design and develop smart city applications? We consider four ontology catalogs that are relevant for IoT and smart cities: 1) READY4SmartCities; 2) linked open vocabulary (LOV); 3) OpenSensingCity (OSC); and 4) LOVs for IoT (LOV4IoT). To support semantic interoperability with the reuse of ontology-based smart city applications, we present a methodology to enrich ontology catalogs with those ontologies. Our methodology is generic enough to be applied to any other domains as is demonstrated by its adoption by OSC and LOV4IoT ontology catalogs. Researchers and developers have completed a survey-based evaluation of the LOV4IoT catalog. The usefulness of ontology catalogs ascertained through this evaluation has encouraged their ongoing growth and maintenance. The quality of IoT and smart city ontologies have been evaluated to improve the ontology catalog quality. We also share the lessons learned regarding ontology best practices and provide suggestions for ontology improvements with a set of software tools.",
"title": ""
},
{
"docid": "6483733f9cfd2eaacb5f368e454416db",
"text": "A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.",
"title": ""
},
{
"docid": "cb32d25a3335bca24add68144246710f",
"text": "A load fault or a semiconductor breakdown in a high-power inverter results in very challenging shorts for the remaining semiconductors. Understanding the short-circuit behavior improves the robustness of the semiconductor and the inverters. However, this important topic is not very well analyzed and only few literature exist. Up to now, five different short-circuit types are known which can occur in two or multilevel inverters. The five short-circuit types differ in the starting condition and the dominating semiconductor during the short. Despite the different initial conditions, the behavior of the high-voltage IGBT and diode during short-circuit can be explained with the help of a capacitive equivalent circuit. The resulting dv/dt, the current distribution between the semiconductors, and the influence of the gate-drive unit on the short-circuit behavior can be explained. To validate the results retrieved from the equivalent circuit, measurements on a high voltage IGBT and simulations with a semiconductor simulator are presented.",
"title": ""
},
{
"docid": "13153476fac37dd879c34907f7db5317",
"text": "Lean deveLopment is a product development paradigm with an endto-end focus on creating value for the customer, eliminating waste, optimizing value streams, empowering people, and continuously improving (see Figure 11). Lean thinking has penetrated many industries. It was first used in manufacturing, with clear goals to empower teams, reduce waste, optimize work streams, and above all keep market and customer needs as the primary decision driver.2 This IEEE Software special issue addresses lean software development as opposed to management or manufacturing theories. In that context, we sought to address some key questions: What design principles deliver value, and how are they introduced to best manage change?",
"title": ""
},
{
"docid": "56a072fc480c64e6a288543cee9cd5ac",
"text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.",
"title": ""
},
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "6b7daba104f8e691dd32cba0b4d66ecd",
"text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.",
"title": ""
},
{
"docid": "6d3b9e3f51e45cb5ade883254c7844d8",
"text": "In this paper we have focused on the carbon nano tube field effect transistor technology. The advantages of CNTFET over MOS technology are also discussed. The structure and types of CNTFET are given in detail along with the variation of threshold voltage with respect to the alteration in CNT diameter. The characteristics curve between gate to source current and drain to source voltage is plotted. Various fixed and variable parameters of CNT are also focused.",
"title": ""
}
] |
scidocsrr
|
48a1cd60e0b5b53dec5a5d2d3b59b09d
|
Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin.
|
[
{
"docid": "e50320cfddc32a918389fbf8707d599f",
"text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.",
"title": ""
}
] |
[
{
"docid": "e3af956e04a55c8bed24efdebdd01931",
"text": "Since the effective and efficient system of water quality monitoring (WQM) are critical implementation for the issue of polluted water globally, with increasing in the development of Wireless Sensor Network (WSN) technology in the Internet of Things (IoT) environment, real time water quality monitoring is remotely monitored by means of real-time data acquisition, transmission and processing. This paper presents a reconfigurable smart sensor interface device for water quality monitoring system in an IoT environment. The smart WQM system consists of Field Programmable Gate Array (FPGA) design board, sensors, Zigbee based wireless communication module and personal computer (PC). The FPGA board is the core component of the proposed system and it is programmed in very high speed integrated circuit hardware description language (VHDL) and C programming language using Quartus II software and Qsys tool. The proposed WQM system collects the five parameters of water data such as water pH, water level, turbidity, carbon dioxide (CO2) on the surface of water and water temperature in parallel and in real time basis with high speed from multiple different sensor nodes.",
"title": ""
},
{
"docid": "1d19e616477e464e00570ca741ee3734",
"text": "Data Warehouses are a good source of data for downstream data mining applications. New data arrives in data warehouses during the periodic refresh cycles. Appending of data on existing data requires that all patterns discovered earlier using various data mining algorithms are updated with each refresh. In this paper, we present an incremental density based clustering algorithm. Incremental DBSCAN is an existing incremental algorithm in which data can be added/deleted to/from existing clusters, one point at a time. Our algorithm is capable of adding points in bulk to existing set of clusters. In this new algorithm, the data points to be added are first clustered using the DBSCAN algorithm and then these new clusters are merged with existing clusters, to come up with the modified set of clusters. That is, we add the clusters incrementally rather than adding points incrementally. It is found that the proposed incremental clustering algorithm produces the same clusters as obtained by Incremental DBSCAN. We have used R*-trees as the data structure to hold the multidimensional data that we need to cluster. One of the major advantages of the proposed approach is that it allows us to see the clustering patterns of the new data along with the existing clustering patterns. Moreover, we can see the merged clusters as well. The proposed algorithm is capable of considerable savings, in terms of region queries performed, as compared to incremental DBSCAN. Results are presented to support the claim.",
"title": ""
},
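The bulk-incremental idea in the preceding entry (cluster the newly arrived points, then merge the new clusters with the existing ones) can be sketched with off-the-shelf DBSCAN. The code below is a simplified illustration under assumed parameters; it ignores the core/border subtleties and the region-query accounting that the actual algorithm handles.

```python
# Simplified bulk-incremental clustering: DBSCAN on the new batch, then merge a
# new cluster into an existing one if any of its points lies within eps of it.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial.distance import cdist

EPS, MIN_PTS = 0.5, 3
rng = np.random.default_rng(0)

old_pts = rng.normal(0.0, 0.2, size=(30, 2))
old_labels = DBSCAN(eps=EPS, min_samples=MIN_PTS).fit_predict(old_pts)

new_pts = np.vstack([rng.normal(0.2, 0.2, size=(15, 2)),   # near the old cluster
                     rng.normal(3.0, 0.2, size=(15, 2))])  # a genuinely new cluster
new_labels = DBSCAN(eps=EPS, min_samples=MIN_PTS).fit_predict(new_pts)

clustered_old = old_pts[old_labels != -1]
clustered_old_labels = old_labels[old_labels != -1]
merged = new_labels.copy()
next_id = old_labels.max() + 1

for c in sorted(set(new_labels) - {-1}):
    members = new_pts[new_labels == c]
    d = cdist(members, clustered_old)
    if d.min() <= EPS:                       # touches an existing cluster: merge
        merged[new_labels == c] = clustered_old_labels[d.min(axis=0).argmin()]
    else:                                    # far from everything: new cluster id
        merged[new_labels == c] = next_id
        next_id += 1

print("cluster ids after merge:", sorted(set(merged) - {-1}))
```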
{
"docid": "ff07ba0331e11f0f5cb8f57b4bf97154",
"text": "Drowsiness while driving is one of the main causes of fatal accidents, especially on monotonous routes such as highways. The goal of this paper is to design a completely standalone, distraction-free, and wearable system for driver drowsiness detection by incorporating the system in a smartwatch. The main objective is to detect the driver's drowsiness level based on the driver behavior derived from the motion data collected from the built-in motion sensors in the smartwatch, such as the accelerometer and the gyroscope. For this purpose, the magnitudes of hand movements are extracted from the motion data and are used to calculate the time, spectral, and phase domain features. The features are selected based on the feature correlation method. Eight features serve as an input to a support vector machine (SVM) classifier. After the SVM training and testing, the highest obtained accuracy was 98.15% (Karolinska sleepiness scale). This user-predefined system can be used by both left-handed and right-handed users, because different SVM models are used for different hands. This is an effective, safe, and distraction-free system for the detection of driver drowsiness.",
"title": ""
},
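The pipeline in the preceding entry (motion magnitude from the built-in sensors, time and spectral features per window, then an SVM) can be sketched end to end on synthetic data. The feature set, window length, and labels below are stand-ins, not the paper's exact configuration.

```python
# Hedged sketch of a wearable drowsiness classifier: magnitude of hand movement,
# a few time/spectral features per window, and an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(acc_xyz):
    mag = np.linalg.norm(acc_xyz, axis=1)              # magnitude of hand movement
    spec = np.abs(np.fft.rfft(mag - mag.mean()))
    return np.array([mag.mean(), mag.std(),            # time-domain features
                     spec[:5].sum(), spec.argmax()])   # crude spectral features

# Synthetic data: "alert" windows contain more hand movement than "drowsy" ones.
windows, labels = [], []
for label, scale in [(0, 1.0), (1, 0.2)]:              # 0 = alert, 1 = drowsy
    for _ in range(50):
        acc = rng.normal(scale=scale, size=(128, 3))
        windows.append(window_features(acc))
        labels.append(label)

X, y = np.array(windows), np.array(labels)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])            # train on every other window
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```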
{
"docid": "b40a6bceb64524aa28cdd668d5dd5900",
"text": "For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.",
"title": ""
},
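As a loose illustration of the reduced-precision activations discussed in the preceding entry, the snippet below uniformly quantizes bounded activations to k bits. It is not the authors' training scheme; the clipping range, bit width, and the note on widening are assumptions.

```python
# k-bit activation quantization sketch: clip to [0, 1], then round to 2^k - 1 levels.
import numpy as np

def quantize_activations(a, k_bits=4):
    a = np.clip(a, 0.0, 1.0)                 # bounded activations (assumed range)
    levels = 2 ** k_bits - 1
    return np.round(a * levels) / levels     # uniform quantization

x = np.linspace(-0.2, 1.2, 8)
print(quantize_activations(x, k_bits=2))
# A WRPN-style model would additionally widen each layer (more filter maps)
# to recover the accuracy lost to the reduced precision.
```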
{
"docid": "b8b82691002e3d694d5766ea3269a78e",
"text": "This article presents a framework for improving the Software Configuration Management (SCM) process, that includes a maturity model to assess software organizations and an approach to guide the transition from diagnosis to action planning. The maturity model and assessment tool are useful to identify the degree of satisfaction for practices considered key for SCM. The transition approach is also important because the application of a model to produce a diagnosis is just a first step, organizations are demanding the generation of action plans to implement the recommendations. The proposed framework has been used to assess a number of software organizations and to generate the basis to build an action plan for improvement. In summary, this article shows that the maturity model and action planning approach are instrumental to reach higher SCM control and visibility, therefore producing higher quality software.",
"title": ""
},
{
"docid": "ad61c6474832ecbe671040dfcb64e6aa",
"text": "This paper provides a brief overview on the recent advances of small-scale unmanned aerial vehicles (UAVs) from the perspective of platforms, key elements, and scientific research. The survey starts with an introduction of the recent advances of small-scale UAV platforms, based on the information summarized from 132 models available worldwide. Next, the evolvement of the key elements, including onboard processing units, navigation sensors, mission-oriented sensors, communication modules, and ground control station, is presented and analyzed. Third, achievements of small-scale UAV research, particularly on platform design and construction, dynamics modeling, and flight control, are introduced. Finally, the future of small-scale UAVs' research, civil applications, and military applications are forecasted.",
"title": ""
},
{
"docid": "d1ba8ad56a6227f771f9cef8139e9f15",
"text": "We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik’s wheel of emotions model. We introduce RBEM-Emo as an extension to the Rule-Based Emission Model algorithm to deduce such emotions from human-written messages. We evaluate our approach on two different datasets and compare its performance with the current state-of-the-art techniques for emotion detection, including a recursive autoencoder. The results of the experimental study suggest that RBEM-Emo is a promising approach advancing the current state-of-the-art in emotion detection.",
"title": ""
},
{
"docid": "897bdad46b659d8b2b1ce0ffd588a0bc",
"text": "Gene expression is controlled at transcriptional and post-transcriptional levels including decoding of messenger RNA (mRNA) into polypeptides via ribosome-mediated translation. Translational regulation has been intensively studied in the model dicot plant Arabidopsis thaliana, and in this study, we assessed the translational status [proportion of steady-state mRNA associated with ribosomes] of mRNAs by Translating Ribosome Affinity Purification followed by mRNA-sequencing (TRAP-seq) in rice (Oryza sativa), a model monocot plant and the most important food crop. A survey of three tissues found that most transcribed rice genes are translated whereas few transposable elements are associated with ribosomes. Genes with short and GC-rich coding regions are overrepresented in ribosome-associated mRNAs, suggesting that the GC-richness characteristic of coding sequences in grasses may be an adaptation that favors efficient translation. Transcripts with retained introns and extended 5' untranslated regions are underrepresented on ribosomes, and rice genes belonging to different evolutionary lineages exhibited differential enrichment on the ribosomes that was associated with GC content. Genes involved in photosynthesis and stress responses are preferentially associated with ribosomes, whereas genes in epigenetic regulation pathways are the least enriched on ribosomes. Such variation is more dramatic in rice than that in Arabidopsis and is correlated with the wide variation of GC content of transcripts in rice. Taken together, variation in the translation status of individual transcripts reflects important mechanisms of gene regulation, which may have a role in evolution and diversification.",
"title": ""
},
{
"docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52",
"text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. The heartbeat detection in BCG should be able to measure sensitively in the regions for lower and higher HR. However, previous detection algorithms are optimized mainly in the region of HR range (60~90 bpm) without considering the HR range of lower (40~60 bpm) and higher (90~110 bpm) HR. Therefore, we proposed an improved method in wide HR range that 40~110 bpm. The proposed algorithm detected the heartbeat greater stability in varying and wider heartbeat intervals as comparing with other previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV. And we obtained the root mean square (RMS) value of 1.67 for separated ranges in HR.",
"title": ""
},
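The two ideas named in the preceding entry, a moving dispersion signal that emphasises heartbeat locations and peak picking constrained by the expected beat spacing, are sketched below on a synthetic BCG trace. The sampling rate, window length, and refractory period are assumed values, and the adaptive prediction window of the actual algorithm is simplified to a fixed minimum peak distance.

```python
# Moving dispersion + spacing-constrained peak picking on a synthetic BCG signal.
import numpy as np
from scipy.signal import find_peaks

FS = 100                                      # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
bcg = np.zeros_like(t)
beat_times = np.arange(0.5, 10, 0.8)          # ~75 bpm synthetic beats
for bt in beat_times:                         # each beat: a short damped oscillation
    idx = int(np.abs(t - bt).argmin())
    bcg[idx:idx + 20] += np.sin(2 * np.pi * 10 * t[:20]) * np.exp(-10 * t[:20])
bcg += 0.05 * rng.normal(size=t.size)

def moving_dispersion(x, win=25):
    pad = np.pad(x, win // 2, mode="edge")
    return np.array([pad[i:i + win].std() for i in range(x.size)])

disp = moving_dispersion(bcg)
min_gap = int(0.4 * FS)                       # refractory period: at most ~150 bpm
peaks, _ = find_peaks(disp, height=disp.mean(), distance=min_gap)
hr = 60 * FS / np.median(np.diff(peaks))
print("estimated HR (bpm):", round(float(hr), 1), "beats found:", len(peaks))
```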
{
"docid": "894e945c9bb27f5464d1b8f119139afc",
"text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.",
"title": ""
},
{
"docid": "740d130948c25d5cd2027645bab151a9",
"text": "Ahstract-The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective, Cartesian robot system Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.",
"title": ""
},
{
"docid": "d42bbb6fe8d99239993ed01aa44c32ef",
"text": "Chemical communication plays a very important role in the lives of many social insects. Several different types of pheromones (species-specific chemical messengers) of ants have been described, particularly those involved in recruitment, recognition, territorial and alarm behaviours. Properties of pheromones include activity in minute quantities (thus requiring sensitive methods for chemical analysis) and specificity (which can have chemotaxonomic uses). Ants produce pheromones in various exocrine glands, such as the Dufour, poison, pygidial and mandibular glands. A wide range of substances have been identified from these glands.",
"title": ""
},
{
"docid": "58061318f47a2b96367fe3e8f3cd1fce",
"text": "The growth of lymphatic vessels (lymphangiogenesis) is actively involved in a number of pathological processes including tissue inflammation and tumor dissemination but is insufficient in patients suffering from lymphedema, a debilitating condition characterized by chronic tissue edema and impaired immunity. The recent explosion of knowledge on the molecular mechanisms governing lymphangiogenesis provides new possibilities to treat these diseases.",
"title": ""
},
{
"docid": "e64320b71675f2a059a50fd9479d2056",
"text": "Extreme sports (ES) are usually pursued in remote locations with little or no access to medical care with the athlete competing against oneself or the forces of nature. They involve high speed, height, real or perceived danger, a high level of physical exertion, spectacular stunts, and heightened risk element or death.Popularity for such sports has increased exponentially over the past two decades with dedicated TV channels, Internet sites, high-rating competitions, and high-profile sponsors drawing more participants.Recent data suggest that the risk and severity of injury in some ES is unexpectedly high. Medical personnel treating the ES athlete need to be aware there are numerous differences which must be appreciated between the common traditional sports and this newly developing area. These relate to the temperament of the athletes themselves, the particular epidemiology of injury, the initial management following injury, treatment decisions, and rehabilitation.The management of the injured extreme sports athlete is a challenge to surgeons and sports physicians. Appropriate safety gear is essential for protection from severe or fatal injuries as the margins for error in these sports are small.The purpose of this review is to provide an epidemiologic overview of common injuries affecting the extreme athletes through a focus on a few of the most popular and exciting extreme sports.",
"title": ""
},
{
"docid": "868df6c0c43dd49588cc0892b50e8079",
"text": "Software bugs, such as concurrency, memory and semantic bugs, can significantly affect system reliability. Although much effort has been made to address this problem, there are still many bugs that cannot be detected, especially concurrency bugs due to the complexity of concurrent programs. Effective approaches for detecting these common bugs are therefore highly desired.\n This paper presents an invariant-based bug detection tool, DefUse, which can detect not only concurrency bugs (including the previously under-studied order violation bugs), but also memory and semantic bugs. Based on the observation that many bugs appear as violations to programmers' data flow intentions, we introduce three different types of definition-use invariants that commonly exist in both sequential and concurrent programs. We also design an algorithm to automatically extract such invariants from programs, which are then used to detect bugs. Moreover, DefUse uses various techniques to prune false positives and rank error reports.\n We evaluated DefUse using sixteen real-world applications with twenty real-world concurrency and sequential bugs. Our results show that DefUse can effectively detect 19 of these bugs, including 2 new bugs that were never reported before, with only a few false positives. Our training sensitivity results show that, with the benefit of the pruning and ranking algorithms, DefUse is accurate even with insufficient training.",
"title": ""
},
{
"docid": "effbe5c9cd150b01e0659707e72650a9",
"text": "Research on grammatical error correction has received considerable attention. For dealing with all types of errors, grammatical error correction methods that employ statistical machine translation (SMT) have been proposed in recent years. An SMT system generates candidates with scores for all candidates and selects the sentence with the highest score as the correction result. However, the 1-best result of an SMT system is not always the best result. Thus, we propose a reranking approach for grammatical error correction. The reranking approach is used to re-score N-best results of the SMT and reorder the results. Our experiments show that our reranking system using parts of speech and syntactic features improves performance and achieves state-of-theart quality, with an F0.5 score of 40.0.",
"title": ""
},
{
"docid": "5cfd76e2e09e94daded01722fdcda704",
"text": "Online genealogy datasets contain extensive information about millions of people and their past and present family connections. This vast amount of data can help identify various patterns in the human population. In this study, we present methods and algorithms that can assist in identifying variations in lifespan distributions of the human population in the past centuries, in detecting social and genetic features that correlate with the human lifespan, and in constructing predictive models of human lifespan based on various features that can easily be extracted from genealogy datasets.\n We have evaluated the presented methods and algorithms on a large online genealogy dataset with over a million profiles and over 9 million connections, all of which were collected from the WikiTree website. Our findings indicate that significant but small positive correlations exist between the parents’ lifespan and their children’s lifespan. Additionally, we found slightly higher and significant correlations between the lifespans of spouses. We also discovered a very small positive and significant correlation between longevity and reproductive success in males, and a small and significant negative correlation between longevity and reproductive success in females. Moreover, our predictive models presented results with a Mean Absolute Error as low as 13.18 in predicting the lifespans of individuals who outlived the age of 10, and our classification models presented better than random classification results in predicting which people who outlive the age of 50 will also outlive the age of 80.\n We believe that this study will be the first of many studies to utilize the wealth of data on human populations, existing in online genealogy datasets, to better understand factors that influence the human lifespan. Understanding these factors can assist scientists in providing solutions for successful aging.",
"title": ""
},
{
"docid": "186f2950bd4ce621eb0696c2fd09a468",
"text": "In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experiments compared the disentangled VAE with both a standard (entangled) VAE and a vanilla supervised model. Results show that the disentangled VAE significantly outperforms the other two models when the proportion of labelled data is artificially reduced, while it loses this advantage when the amount of labelled data increases, and instead matches the performance of the other models. These results suggest that the disentangled VAE may be useful in situations where labelled data is scarce but unlabelled data is abundant.",
"title": ""
},
{
"docid": "8d7ece4b518223bc8156b173875d06e3",
"text": "This paper presents two robot devices for use in the rehabilitation of upper limb movements and reports the quantitative parameters obtained to characterize the rate of improvement, thus allowing a precise monitoring of patient's recovery. A one degree of freedom (DoF) wrist manipulator and a two-DoF elbow-shoulder manipulator were designed using an admittance control strategy; if the patient could not move the handle, the devices completed the motor task. Two groups of chronic post-stroke patients (G1 n=7, and G2 n=9) were enrolled in a three week rehabilitation program including standard physical therapy (45 min daily) plus treatment by means of robot devices, respectively, for wrist and elbow-shoulder movements (40 min, twice daily). Both groups were evaluated by means of standard clinical assessment scales and a new robot measured evaluation metrics that included an active movement index quantifying the patient's ability to execute the assigned motor task without robot assistance, the mean velocity, and a movement accuracy index measuring the distance of the executed path from the theoretic one. After treatment, both groups improved their motor deficit and disability. In G1, there was a significant change in the clinical scale values (p<0.05) and range of motion wrist extension (p<0.02). G2 showed a significant change in clinical scales (p<0.01), in strength (p<0.05) and in the robot measured parameters (p<0.01). The relationship between robot measured parameters and the clinical assessment scales showed a moderate and significant correlation (r>0.53 p<0.03). Our findings suggest that robot-aided neurorehabilitation may improve the motor outcome and disability of chronic post-stroke patients. The new robot measured parameters may provide useful information about the course of treatment and its effectiveness at discharge.",
"title": ""
},
{
"docid": "63934b1fdc9b7c007302cdc41a42744e",
"text": "Recently, various energy harvesting techniques from ambient environments were proposed as alternative methods for powering sensor nodes, which convert the ambient energy from environments into electricity to power sensor nodes. However, those techniques are not applicable to the wireless sensor networks (WSNs) in the environment with no ambient energy source. To overcome this problem, an RF energy transfer method was proposed to power wireless sensor nodes. However, the RF energy transfer method also has a problem of unfairness among sensor nodes due to the significant difference between their energy harvesting rates according to their positions. In this paper, we propose a medium access control (MAC) protocol for WSNs based on RF energy transfer. The proposed MAC protocol adaptively manages the duty cycle of sensor nodes according to their the amount of harvested energy as well as the contention time of sensor nodes considering fairness among them. Through simulations, we show that our protocol can achieve a high degree of fairness, while maintaining duty cycle of sensor nodes appropriately according to the amount of their harvested energy.",
"title": ""
}
] |
scidocsrr
|
dae4ad0e257750eb7a8bcf649b9b9708
|
From the Virtual to the Real World: Referring to Objects in Real-World Spatial Scenes
|
[
{
"docid": "9eaab923986bf74bdd073f6766ca45b2",
"text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"title": ""
}
] |
[
{
"docid": "203ae6dee1000e83dbce325c14539365",
"text": "In this paper, the usefulness of several topologies of DC-DC converters for measuring the characteristic curves of photovoltaic (PV) modules is theoretically analyzed. Eight topologies of DC-DC converters with step-down/step-up conversion relation (buck-boost single inductor, CSC (canonical switching cell), Cuk, SEPIC (single-ended primary inductance converter), zeta, flyback, boost-buck-cascaded, and buck-boost-cascaded converters) are compared and evaluated. This application is based on the property of these converters for emulating a resistor when operating in continuous conduction mode. Therefore, they are suitable to implement a system capable of measuring the I-V curve of PV modules. Other properties have been taken into account: input ripple, devices stress, size of magnetic components and input-output isolation. The study determines that SEPIC and Cuk converters are the most suitable for this application mainly due to the low input current ripple, allow input-output insulation and can be connected in parallel in order to measure PV modules o arrays with greater power. CSC topology is also suitable because it uses fewer components but of a larger size. Experimental results validate the comparative analysis.",
"title": ""
},
{
"docid": "8d45138ec69bb4ee47efa088c03d7a42",
"text": "Precision medicine is at the forefront of biomedical research. Cancer registries provide rich perspectives and electronic health records (EHRs) are commonly utilized to gather additional clinical data elements needed for translational research. However, manual annotation is resource-intense and not readily scalable. Informatics-based phenotyping presents an ideal solution, but perspectives obtained can be impacted by both data source and algorithm selection. We derived breast cancer (BC) receptor status phenotypes from structured and unstructured EHR data using rule-based algorithms, including natural language processing (NLP). Overall, the use of NLP increased BC receptor status coverage by 39.2% from 69.1% with structured medication information alone. Using all available EHR data, estrogen receptor-positive BC cases were ascertained with high precision (P = 0.976) and recall (R = 0.987) compared with gold standard chart-reviewed patients. However, status negation (R = 0.591) decreased 40.2% when relying on structured medications alone. Using multiple EHR data types (and thorough understanding of the perspectives offered) are necessary to derive robust EHR-based precision medicine phenotypes.",
"title": ""
},
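The rule-based phenotyping described in the preceding entry can be illustrated with a toy keyword extractor for estrogen-receptor status. The regular expressions, clinical snippets, and function name below are invented for illustration; a real pipeline would add negation handling, section detection, and the structured-medication evidence the abstract mentions.

```python
# Toy rule-based extraction of ER status from free-text pathology notes.
import re

POSITIVE = re.compile(r"\b(?:ER|estrogen receptor)[\s-]*(?:positive|pos\b|\+)", re.I)
NEGATIVE = re.compile(r"\b(?:ER|estrogen receptor)[\s-]*(?:negative|neg\b)", re.I)

def er_status(note: str) -> str:
    if NEGATIVE.search(note):
        return "ER-negative"
    if POSITIVE.search(note):
        return "ER-positive"
    return "unknown"

notes = [
    "Pathology: invasive ductal carcinoma, ER positive, PR positive.",
    "Immunohistochemistry shows estrogen receptor negative tumor.",
    "Patient started on tamoxifen.",   # structured medication data would catch this
]
for n in notes:
    print(er_status(n), "<-", n)
```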
{
"docid": "6827fc6b1096dbfc7dfbd1886911a4ff",
"text": "This paper proposes a new formulation and solution to image-based 3D modeling (aka “multi-view stereo”) based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately through optimizing a physically sound image generation model based on volumetric ray tracing. Together with geometric priors, they are put together into a Bayesian formulation known as Markov random field (MRF) model. This MRF model is different from typical MRFs used in image analysis in the sense that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with large clique size, an algorithm with linear computational complexity is developed by exploiting, using dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.",
"title": ""
},
{
"docid": "d3ce4e666ce658228be23c5a26b87527",
"text": "Deep Neural Networks (DNNs) have emerged as a powerful and versatile set of techniques to address challenging artificial intelligence (AI) problems. Applications in domains such as image/video processing, natural language processing, speech synthesis and recognition, genomics and many others have embraced deep learning as the foundational technique. DNNs achieve superior accuracy for these applications using very large models which require 100s of MBs of data storage, ExaOps of computation and high bandwidth for data movement. Despite advances in computing systems, training state-of-the-art DNNs on large datasets takes several days/weeks, directly limiting the pace of innovation and adoption. In this paper, we discuss how these challenges can be addressed via approximate computing. Based on our earlier studies demonstrating that DNNs are resilient to numerical errors from approximate computing, we present techniques to reduce communication overhead of distributed deep learning training via adaptive residual gradient compression (AdaComp), and computation cost for deep learning inference via Prameterized clipping ACTivation (PACT) based network quantization. Experimental evaluation demonstrates order of magnitude savings in communication overhead for training and computational cost for inference while not compromising application accuracy.",
"title": ""
},
{
"docid": "53a7aff5f5409e3c2187a5d561ff342e",
"text": "We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing behavior data obtained from 1365 players that completed the TRU game. The unsupervised learning approach utilized reveals four types of players which are analyzed within the context of the game. The proposed approach automates, in part, the traditional user and play testing procedures followed in the game industry since it can inform game developers, in detail, if the players play the game as intended by the game design. Subsequently, player models can assist the tailoring of game mechanics in real-time for the needs of the player type identified.",
"title": ""
},
{
"docid": "797301307659377049b04ff1c02ca6ec",
"text": "Spectrograms of speech and audio signals are time-frequency densities, and by construction, they are non-negative and do not have phase associated with them. Under certain conditions on the amount of overlap between consecutive frames and frequency sampling, it is possible to reconstruct the signal from the spectrogram. Deviating from this requirement, we develop a new technique to incorporate the phase of the signal in the spectrogram by satisfying what we call as the delta dominance condition, which in general is different from the well known minimum-phase condition. In fact, there are signals that are delta dominant but not minimum-phase and vice versa. The delta dominance condition can be satisfied in multiple ways, for example by placing a Kronecker impulse of the right amplitude or by choosing a suitable window function. A direct consequence of this novel way of constructing the spectrograms is that the phase of the signal is directly encoded or embedded in the spectrogram. We also develop a reconstruction methodology that takes such phase-encoded spectrograms and obtains the signal using the discrete Fourier transform (DFT). It is envisaged that the new class of phase-encoded spectrogram representations would find applications in various speech processing tasks such as analysis, synthesis, enhancement, and recognition.",
"title": ""
},
{
"docid": "c1eb39f2c823a9c40041268b78a75e86",
"text": "Distamycin binds the minor groove of duplex DNA at AT-rich regions and has been a valuable probe of protein interactions with double-stranded DNA. We ®nd that distamycin can also inhibit protein interactions with G-quadruplex (G4) DNA, a stable fourstranded structure in which the repeating unit is a G-quartet. Using NMR, we show that distamycin binds speci®cally to G4 DNA, stacking on the terminal G-quartets and contacting the ̄anking bases. These results demonstrate the utility of distamycin as a probe of G4 DNA±protein interactions and show that there are (at least) two distinct modes of protein±G4 DNA recognition which can be distinguished by sensitivity to distamycin.",
"title": ""
},
{
"docid": "b14ce16f81bf19c2e3ae1120b42f14c0",
"text": "Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm with gripper that can be used to pick up a moving object. There is an interest in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping of parts on a moving conveyor system, assembly of articulated parts, or for grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic as opposed to static environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and interception and grasping. The system is able to operate at approximately human arm movement rates, and experimental results in which a moving model train is tracked is presented, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.",
"title": ""
},
{
"docid": "7cf7a419cf681e9deea42d77e0e9cec2",
"text": "Industrial organizations use Energy Management Systems (EMS) to monitor, control, and optimize their energy consumption. Industrial EMS are complex and expensive systems due to the unique requirements of performance, reliability, and interoperability. Moreover, industry is facing challenges with current EMS implementations such as cross-site monitoring of energy consumption and CO2 emissions, integration between energy and production data, and meaningful energy efficiency benchmarking. Additionally, big data has emerged because of recent advances in field instrumentation that led to the generation of large quantities of machine data, with much more detail and higher sampling rates. This created a challenge for real-time analytics. In order to address all these needs and challenges, we propose a cloud-native industrial EMS solution with cloud computing capabilities. Through this innovative approach we expect to generate useful knowledge in a shorter time period, enabling organizations to react quicker to changes of events and detect hidden patterns that compromise efficiency.",
"title": ""
},
{
"docid": "b6f4a2122f8fe1bc7cb4e59ad7cf8017",
"text": "The use of biomass to provide energy has been fundamental to the development of civilisation. In recent times pressures on the global environment have led to calls for an increased use of renewable energy sources, in lieu of fossil fuels. Biomass is one potential source of renewable energy and the conversion of plant material into a suitable form of energy, usually electricity or as a fuel for an internal combustion engine, can be achieved using a number of different routes, each with specific pros and cons. A brief review of the main conversion processes is presented, with specific regard to the production of a fuel suitable for spark ignition gas engines.",
"title": ""
},
{
"docid": "f1018166da0922b5428bd1b37e2120ee",
"text": "In many water distribution systems, a significant amount of water is lost because of leakage during transit from the water treatment plant to consumers. As a result, water leakage detection and localization have been a consistent focus of research. Typically, diagnosis or detection systems based on sensor signals incur significant computational and time costs, whereas the system performance depends on the features selected as input to the classifier. In this paper, to solve this problem, we propose a novel, fast, and accurate water leakage detection system with an adaptive design that fuses a one-dimensional convolutional neural network and a support vector machine. We also propose a graph-based localization algorithm to determine the leakage location. An actual water pipeline network is represented by a graph network and it is assumed that leakage events occur at virtual points on the graph. The leakage location at which costs are minimized is estimated by comparing the actual measured signals with the virtually generated signals. The performance was validated on a wireless sensor network based test bed, deployed on an actual WDS. Our proposed methods achieved 99.3% leakage detection accuracy and a localization error of less than 3 m.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "f157b3fb65d4ce1df6d6bb549b020fa0",
"text": "We have developed a reversible method to convert color graphics and pictures to gray images. The method is based on mapping colors to low-visibility high-frequency textures that are applied onto the gray image. After receiving a monochrome textured image, the decoder can identify the textures and recover the color information. More specifically, the image is textured by carrying a subband (wavelet) transform and replacing bandpass subbands by the chrominance signals. The low-pass subband is the same as that of the luminance signal. The decoder performs a wavelet transform on the received gray image and recovers the chrominance channels. The intent is to print color images with black and white printers and to be able to recover the color information afterwards. Registration problems are discussed and examples are presented.",
"title": ""
},
{
"docid": "0fd4268e53504d296f4cbe8330928759",
"text": "Wireless sensor networks, which are used for monitoring large-scale linear structures, are having a significant importance due to its implementation and low cost maintenance. The main challenges that this technology faces are the location and automatic identification of thousands of nodes that are part of a linear large-scale infrastructure, with minimum processing and low power consumption of the nodes. This paper proposes an automatic identification and location mechanism of the nodes that are part of a linear structure using link-level processes and the IEEE 802.15.4 protocol. This mechanism improves the reliability in the allocation process of identifiers without using routing protocols to minimize the number of sent messages, also the computation process is done at the nodes, and therefore it reduces energy consumption. The proposed algorithm is evaluated using a prototype, for which we present initial performance analysis results focusing on delay. The implementation results confirm that with the proposed algorithm we obtain automatic allocations of identifiers.",
"title": ""
},
{
"docid": "8ab4f34c736742a153477f919dfb4d8f",
"text": "In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4’, 10’, 20’ and 40’ time intervals. We explore the necessary tradeoffs between accuracy, performance and resource utilization are explored given the large volume and update rates of input data. We start with building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches and identify the best candidate to be used in implementing a single-pass predictive approach, under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in terms of resource usage while not compromising accuracy.",
"title": ""
},
{
"docid": "e328b04e434ed3bff2077b703b7b4c1e",
"text": "We have created a large diverse set of cars from overhead images, which are useful for training a deep learner to binary classify, detect and count them. The dataset and all related material will be made publically available. The set contains contextual matter to aid in identification of difficult targets. We demonstrate classification and detection on this dataset using a neural network we call ResCeption. This network combines residual learning with Inception-style layers and is used to count cars in one look. This is a new way to count objects rather than by localization or density estimation. It is fairly accurate, fast and easy to implement. Additionally, the counting method is not car or scene specific. It would be easy to train this method to count other kinds of objects and counting over new scenes requires no extra set up or assumptions about object locations.",
"title": ""
},
{
"docid": "80e90d91ccbe79d932fbdd0ece7a2578",
"text": "In this paper, a novel high-performance compact dual planar electromagnetic bandgap (DP-EBG) microstrip low-pass filter with a U-shaped geometry is proposed. By employing the unique DP-EBG configuration and the U-shaped geometry of the microstrip line (MLIN), the proposed structure achieves a wide stopband with high attenuation and a high selectivity within a small circuit area. Its passband ripple level is low due to the U-shaped geometry and the electromagnetic bandgap (EBG) structure with square patches inserted at the bends of the MLIN. The Chebyshev tapering technique is used to taper components of the proposed structure in order to eliminate ripples caused by the EBG periodicity. The structure was fabricated and the measured results are in good agreement with the simulated results. This novel design demonstrates superior low-pass filtering functionality and can easily be applied to monolithic circuits.",
"title": ""
},
{
"docid": "c9ad1daa4ee0d900c1a2aa9838eb9918",
"text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.",
"title": ""
},
{
"docid": "aacfd1e4670044e597f8a321375bdfc1",
"text": "This article presents the main outcome findings from two inter-related randomized trials conducted at four sites to evaluate the effectiveness and cost-effectiveness of five short-term outpatient interventions for adolescents with cannabis use disorders. Trial 1 compared five sessions of Motivational Enhancement Therapy plus Cognitive Behavioral Therapy (MET/CBT) with a 12-session regimen of MET and CBT (MET/CBT12) and another that included family education and therapy components (Family Support Network [FSN]). Trial II compared the five-session MET/CBT with the Adolescent Community Reinforcement Approach (ACRA) and Multidimensional Family Therapy (MDFT). The 600 cannabis users were predominately white males, aged 15-16. All five CYT interventions demonstrated significant pre-post treatment during the 12 months after random assignment to a treatment intervention in the two main outcomes: days of abstinence and the percent of adolescents in recovery (no use or abuse/dependence problems and living in the community). Overall, the clinical outcomes were very similar across sites and conditions; however, after controlling for initial severity, the most cost-effective interventions were MET/CBT5 and MET/CBT12 in Trial 1 and ACRA and MET/CBT5 in Trial 2. It is possible that the similar results occurred because outcomes were driven more by general factors beyond the treatment approaches tested in this study; or because of shared, general helping factors across therapies that help these teens attend to and decrease their connection to cannabis and alcohol.",
"title": ""
},
{
"docid": "9888ef3aefca1049307ecd49ea5a3a49",
"text": "We live in a \"small world,\" where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship.",
"title": ""
}
] |
scidocsrr
|
0046bc63ba5db0620e78267300865ad2
|
A Human Activity Recognition System Using Skeleton Data from RGBD Sensors
|
[
{
"docid": "f50c735147be5112bc3c81107002d99a",
"text": "Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scaleinvariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.",
"title": ""
},
{
"docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79",
"text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.",
"title": ""
},
{
"docid": "ca2d9b2fe08cda70aa37410aa30e2f2a",
"text": "3D human pose estimation from a single image is a challenging problem, especially for in-the-wild settings due to the lack of 3D annotated data. We propose two anatomically inspired loss functions and use them with the weaklysupervised learning framework of [41] to jointly learn from large-scale in-thewild 2D and indoor/synthetic 3D data. We also present a simple temporal network that exploits temporal and structural cues present in predicted pose sequences to temporally harmonize the pose estimations. We carefully analyze the proposed contributions through loss surface visualizations and sensitivity analysis to facilitate deeper understanding of their working mechanism. Our complete pipeline improves the state-of-the-art by 11.8% and 12% on Human3.6M and MPI-INF3DHP, respectively, and runs at 30 FPS on a commodity graphics card.",
"title": ""
}
] |
[
{
"docid": "ed7ee17fd3410f05a4057a1385b9d215",
"text": "Social loafing is the tendency for individuals to expend less effort when working collectively than when working individually. A meta-analysis of 78 studies demonstrates that social loafing is robust and generalizes across tasks and S populations. A large number of variables were found to moderate social loafing. Evaluation potential, expectations of co-worker performance, task meaningfulness, and culture had especially strong influence. These findings are interpreted in the light of a Collective Effort Model that integrates elements of expectancy-value, social identity, and self-validation theories.",
"title": ""
},
{
"docid": "7f799fbe03849971cb3272e35e7b13db",
"text": "Text often expresses the writer's emotional state or evokes emotions in the reader. The nature of emotional phenomena like reading and writing can be interpreted in different ways and represented with different computational models. Affective computing (AC) researchers often use a categorical model in which text data is associated with emotional labels. We introduce a new way of using normative databases as a way of processing text with a dimensional model and compare it with different categorical approaches. The approach is evaluated using four data sets of texts reflecting different emotional phenomena. An emotional thesaurus and a bag-‐of-‐words model are used to generate vectors for each pseudo-‐ document, then for the categorical models three dimensionality reduction techniques are evaluated: Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Non-‐negative Matrix Factorization (NMF). For the dimensional model a normative database is used to produce three-‐dimensional vectors (valence, arousal, dominance) for each pseudo-‐document. This 3-‐dimensional model can be used to generate psychologically driven visualizations. Both models can be used for affect detection based on distances amongst categories and pseudo-‐documents. Experiments show that the categorical model using NMF and the dimensional model tend to perform best. 1. INTRODUCTION Emotions and affective states are pervasive in all forms of communication, including text based, and increasingly recognized as important to understanding the full meaning that a message conveys, or the impact it will have on readers. Given the increasing amounts of textual communication being produced (e.g. emails, user created content, published content) researchers are seeking automated language processing techniques that include models of emotions. Emotions and other affective states (e.g. moods) have been studied by many disciplines. Affect scientists have studied emotions since Darwin (Darwin, 1872), and different schools within psychology have produced different theories representing different ways of interpreting affective phenomena (comprehensively reviewed in Davidson, Scherer and Goldsmith, 2003). In the last decade technologists have also started contributing to this research. Affective Computing (AC) in particular is contributing new ways to improve communication between the sensitive human and the unemotionally computer. AC researchers have developed computational systems that recognize and respond to the affective states of the user (Calvo and D'Mello, 2010). Affect-‐sensitive user interfaces are being developed in a number of domains including gaming, mental health, and learning technologies. The basic tenet behind most AC systems is that automatically recognizing and responding to a user's affective states during interactions with a computer, …",
"title": ""
},
{
"docid": "bcf0156fdc95f431c550e0554cddbcbc",
"text": "This paper deals with incremental classification and its particular application to invoice classification. An improved version of an already existant incremental neural network called IGNG (incremental growing neural gas) is used for this purpose. This neural network tries to cover the space of data by adding or deleting neurons as data is fed to the system. The improved version of the IGNG, called I2GNG used local thresholds in order to create or delete neurons. Applied on invoice documents represented with graphs, I2GNG shows a recognition rate of 97.63%.",
"title": ""
},
{
"docid": "cc3f47aba00cb986bdb8234f98726c57",
"text": "Gender differences in brain development and in the prevalence of neuropsychiatric disorders such as depression have been reported. Gender differences in human brain might be related to patterns of gene expression. Microarray technology is one useful method for investigation of gene expression in brain. We investigated gene expression, cell types, and regional expression patterns of differentially expressed sex chromosome genes in brain. We profiled gene expression in male and female dorsolateral prefrontal cortex, anterior cingulate cortex, and cerebellum using the Affymetrix oligonucleotide microarray platform. Differentially expressed genes between males and females on the Y chromosome (DBY, SMCY, UTY, RPS4Y, and USP9Y) and X chromosome (XIST) were confirmed using real-time PCR measurements. In situ hybridization confirmed the differential expression of gender-specific genes and neuronal expression of XIST, RPS4Y, SMCY, and UTY in three brain regions examined. The XIST gene, which silences gene expression on regions of the X chromosome, is expressed in a subset of neurons. Since a subset of neurons express gender-specific genes, neural subpopulations may exhibit a subtle sexual dimorphism at the level of differences in gene regulation and function. The distinctive pattern of neuronal expression of XIST, RPS4Y, SMCY, and UTY and other sex chromosome genes in neuronal subpopulations may possibly contribute to gender differences in prevalence noted for some neuropsychiatric disorders. Studies of the protein expression of these sex-chromosome-linked genes in brain tissue are required to address the functional consequences of the observed gene expression differences.",
"title": ""
},
{
"docid": "53c2835a45ff743633f9d08867ca3f06",
"text": "This paper presents a mathematical model and vertical flight control algorithms for a new tilt-wing unmanned aerial vehicle (UAV). The vehicle is capable of vertical take-off and landing (VTOL). Due to its tilt-wing structure, it can also fly horizontally. The mathematical model of the vehicle is obtained using Newton-Euler formulation. A gravity compensated PID controller is designed for altitude control, and three PID controllers are designed for attitude stabilization of the vehicle. Performances of these controllers are found to be quite satisfactory as demonstrated by indoor and outdoor flight experiments.",
"title": ""
},
{
"docid": "8ff324e6321ea14b1b5270b069116dd8",
"text": "In this chapter, the topic of using process improvement approaches to improve knowledge work is addressed. The effective performance of knowledge work is critical to contemporary sophisticated economies. It is suggested that traditional, engineering-based approaches to knowledge work are incompatible with the autonomy and work approaches of many knowledge workers. Therefore, a variety of alternative process-oriented approaches to knowledge work are described. Emphasis is placed on differentiating among different types of knowledge work and applying process interventions that are more behaviorally sensitive.",
"title": ""
},
{
"docid": "8df79c58b061dda84421108438d0563e",
"text": "A case of self-strangulation with a rare kind of ligature material is reported and discussed. The merit of the case lies in the 'self-retaining' nature of the ligature material deployed. The case subject was a 50 year old man found dead in an open field with a unique ligature material of 'plastic lock tie' in-situ at neck. Forensic autopsy revealed ligature mark above the level of thyroid cartilage, evidence of bleeding through mouth and nostrils along with generalized features of congestion. Toxicological analysis of blood and viscera detected organophosphorus poison in stomach contents. Cause of death was opined as mechanical asphyxia due to compression of neck by self-strangulation. The importance of a scrupulous forensic autopsy supplemented by ancillary investigations and circumstantial evidences are highlighted. The relevance of the visit of autopsy surgeon to the scene of occurrence is emphasized.",
"title": ""
},
{
"docid": "669b4b1574c22a0c18dd1dc107bc54a1",
"text": "T lymphocytes respond to foreign antigens both by producing protein effector molecules known as lymphokines and by multiplying. Complete activation requires two signaling events, one through the antigen-specific receptor and one through the receptor for a costimulatory molecule. In the absence of the latter signal, the T cell makes only a partial response and, more importantly, enters an unresponsive state known as clonal anergy in which the T cell is incapable of producing its own growth hormone, interleukin-2, on restimulation. Our current understanding at the molecular level of this modulatory process and its relevance to T cell tolerance are reviewed.",
"title": ""
},
{
"docid": "ee472d575bb598dcb4d5d8e4218d25e7",
"text": "This paper proposes a new target impact point estimation system using acoustic sensors. The proposed system estimates projectile trajectory where it hits a target plane by detecting shock wave created by the passage of a supersonic projectile near the target. The method first measures TDOA (Time Delay Of Arrival) of the shock wave from the two sets of acoustic sensors of the equilateral triangular shape arranged horizontally under the target. Then the acoustic hit coordinate on the target is calculated using triangulation method. The performance of the proposed algorithm was confirmed by comparing the actual impact point with the estimated coordinates of the impact point calculated by proposed algorithm through the actual shooting experiments.",
"title": ""
},
{
"docid": "a67df1737ca4e5cb41fe09ccb57c0e88",
"text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulations results are validated via built-in hardware prototype. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 48 . Introduction Ever growing energy demand by mankind and the limited availbility of resources remain as a major challenge to the power sector ndustry. The need for renewable energy resources has been augented in large scale and aroused due to its huge availability nd pollution free operation. Among the various renewable energy esources, solar energy has gained worldwide recognition because f its minimal maintenance, zero noise and reliability. Because of he aforementioned advantages; solar energy have been widely sed for various applications, but not limited to, such as megawatt cale power plants, water pumping, solar home systems, commuPlease cite this article in press as: R. Venugopalan, et al., Modified Parti Tracking for uniform and under partial shading condition, Appl. Soft C ication satellites, space vehicles and reverse osmosis plants [1]. owever, power generation using solar energy still remain uncerain, despite of all the efforts, due to various factors such as poor ∗ Corresponding author at: SELECT, VIT University, Vellore, Tamilnadu 632014, ndia. Tel.: +91 9600117935; fax: +91 9490113830. E-mail address: sudhakar.babu2013@vit.ac.in (T. Sudhakarbabu). ttp://dx.doi.org/10.1016/j.asoc.2015.05.029 568-4946/© 2015 Published by Elsevier B.V. 49 50 51 52 conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. 
Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In fractional open circuit voltage, and fractional short circuit current method; its performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, exact value of Maximum cle Swarm Optimization technique based Maximum Power Point omput. J. (2015), http://dx.doi.org/10.1016/j.asoc.2015.05.029 Power Point (MPP) cannot be assured. Perturb and Observe (P&O) method works with the voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the 53 54 55 56 ARTICLE IN G Model ASOC 2982 1–12 2 R. Venugopalan et al. / Applied Soft C Nomenclature IPV Current source Rs Series resistance Rp Parallel resistance VD diode voltage ID diode current I0 leakage current Vmpp voltage at maximum power point Voc open circuit voltage Impp current at maximum power point Isc short circuit current Vmpn nominal maximum power point voltage at 1000 W/m2 Npp number of parallel PV modules Nss number of series PV modules w weight factor c1 acceleration factor c2 acceleration factor pbest personal best position gbest global best position Vt thermal voltage K Boltzmann constant T temperature q electron charge Ns number of cells in series Vocn nominal open circuit voltage at 1000W/m2 G irradiation Gn nominal Irradiation Kv voltage temperature coefficient dT difference in temperature RLmin minimum value of load at output RLmax maximum value of load at output Rin internal resistance of the PV module RPVmin minimum reflective impedance of PV array RPVmax maximum reflective impedance of PV array R equivalent output load resistance t M o w t b A c M h n ( e a i p p w a u t H o i 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 o b converter efficiency racking speed and the steady state oscillations in the region of PP [15]. Incremental Conductance (Inc. Cond.) algorithm works n the principle of comparing ratios of Incremental Conductance ith instantaneous conductance and it has the similar disadvanage as that of P&O method [20,21]. HC method works alike P&O ut it is based on the perturbation of duty cycle of power converter. ll these traditional methods have the following disadvantages in ommon; reduced efficiency and steady state oscillations around PP. Realizing the above stated drawbacks; various researchers ave worked on applying certain Artificial Intelligence (AI) techiques like Neural Network (NN) [27,28] and Fuzzy Logic Control FLC) [29,30]. However, these techniques require periodic training, normous volume of data for training, computational complexity nd large memory capacity. Application of aforementioned MPPT methods for centralzed/string PV system is limited as they fail to track the global eak power under partial shading conditions. 
In addition, multile peaks occur in P-V curve under partial shading condition in hich the unique peak point i.e., global power peak should be ttained. However, when conventional MPPT techniques are used nder such conditions, they usually get trapped in any one of Please cite this article in press as: R. Venugopalan, et al., Modified Part Tracking for uniform and under partial shading condition, Appl. Soft C he local power peaks; drastically lowering the search efficiency. ence, to improve MPP tracking efficiency of conventional methds under PS conditions certain modifications have been proposed n Ref. [31]. Some used two stage approach to track the MPP [32]. PRESS omputing xxx (2015) xxx–xxx In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Alike two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher’s attention for MPPT application; since they ensure sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and lesser number of tuning parameters. Further in PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receive uniform insolation cannot be considered. Traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach",
"title": ""
},
{
"docid": "e9d0c366c241e1fc071d82ca810d1be2",
"text": "The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These to data fusion problems are solved is a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. The stability properties of consensus filters is discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters collectively are capable to provide an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums and that gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.",
"title": ""
},
{
"docid": "e75d3488f38e08a7e83970f444675069",
"text": "In 1950, Gräfenberg described a distinct erotogenic zone on the anterior wall of the vagina, which was referred to as the Gräfenberg spot (G-spot) by Addiego, Whipple (a nurse) et al. in 1981. As a result, the G-spot has become a central topic of popular speculation and a basis of a huge business surrounding it. In our opinion, these sexologists have made a hotchpotch of Gräfenberg’s thoughts and ideas that were set forth and expounded in his 1950 article: the intraurethral glands are not the corpus spongiosum of the female urethra, and Gräfenberg did not report an orgasm of the intraurethral glands. G-spot amplification is a cosmetic surgery procedure for temporarily increasing the size and sensitivity of the G-spot in which a dermal filler or a collagen-like material is injected into the bladder–vaginal septum. All published scientific data point to the fact that the G-spot does not exist, and the supposed G-spot should not be identified with Gräfenberg’s name. Moreover, G-spot amplification is not medically indicated and is an unnecessary and inefficacious medical procedure.",
"title": ""
},
{
"docid": "58e16ce868473276550f17f19ab9938b",
"text": "By fully exploiting the optical channel properties, we propose in this paper the coherent optical zero padding orthogonal frequency division multiplexing (CO-ZP-OFDM) for future high-speed optical transport networks to increase the spectral efficiency and improve the system reliability. Unlike the periodically inserted training symbols in conventional optical OFDM systems, we design the polarization-time-frequency (PTF) coded pilots scattered within the time-frequency grid of the ZP-OFDM payload symbols to realize low-complexity multiple-input multiple-output (MIMO) channel estimation with high accuracy. Compared with conventional optical OFDM systems, CO-ZP-OFDM improves the spectral efficiency by about 6.62%. Simulation results indicate that the low-density parity-check (LDPC) coded bit error rate of the proposed scheme only suffers from no more than 0.3 dB optical signal-to-noise ratio (OSNR) loss compared with the ideal back-to-back case even when the optical channel impairments like chromatic dispersion (CD) and polarization mode dispersion (PMD) are severe.",
"title": ""
},
{
"docid": "5a4d8576222e8b704baaa1b67815ca01",
"text": "In evolutionary robotics, populations of robots are typically trained in simulation before one or more of them are instantiated as physical robots. However, in order to evolve robust behavior, each robot must be evaluated in multiple environments. If an environment is characterized by f free parameters, each of which can take one of np features, each robot must be evaluated in all np environments to ensure robustness. Here, we show that if the robots are constrained to have modular morphologies and controllers, they only need to be evaluated in np environments to reach the same level of robustness. This becomes possible because the robots evolve such that each module of the morphology allows the controller to independently recognize a familiar percept in the environment, and each percept corresponds to one of the environmental free parameters. When exposed to a new environment, the robot perceives it as a novel combination of familiar percepts which it can solve without requiring further training. A non-modular morphology and controller however perceives the same environment as a completely novel environment, requiring further training. This acceleration in evolvability – the rate of the evolution of adaptive and robust behavior – suggests that evolutionary robotics may become a scalable approach for automatically creating complex autonomous machines, if the evolution of neural and morphological modularity is taken into account.",
"title": ""
},
{
"docid": "a88b5c0c627643e0d7b17649ac391859",
"text": "Abduction is a useful decision problem that is related to diagnostics. Given some observation in form of a set of axioms, that is not entailed by a knowledge base, we are looking for explanations, sets of axioms, that can be added to the knowledge base in order to entail the observation. ABox abduction limits both observations and explanations to ABox assertions. In this work we focus on direct tableau-based approach to answer ABox abduction. We develop an ABox abduction algorithm for the ALCHO DL, that is based on Reiter’s minimal hitting set algorithm. We focus on the class of explanations allowing atomic and negated atomic concept assertions, role assertions, and negated role assertions. The algorithm is sound and complete for this class. The algorithm was also implemented, on top of the Pellet reasoner.",
"title": ""
},
{
"docid": "fbcb346e8e7dbe7551a4a87a533c025b",
"text": "Emoji is an essential component in dialogues which has been broadly utilized on almost all social platforms. It could express more delicate feelings beyond plain texts and thus smooth the communications between users, making dialogue systems more anthropomorphic and vivid. In this paper, we focus on automatically recommending appropriate emojis given the contextual information in multiturn dialogue systems, where the challenges locate in understanding the whole conversations. More specifically, we propose the hierarchical long shortterm memory model (H-LSTM) to construct dialogue representations, followed by a softmax classifier for emoji classification. We evaluate our models on the task of emoji classification in a real-world dataset, with some further explorations on parameter sensitivity and case study. Experimental results demonstrate that our method achieves the best performances on all evaluation metrics. It indicates that our method could well capture the contextual information and emotion flow in dialogues, which is significant for emoji recommendation.",
"title": ""
},
{
"docid": "7bb17491cb10db67db09bc98aba71391",
"text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.",
"title": ""
},
{
"docid": "63cef4e93184c865e0d42970ca9de9db",
"text": "Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of “time-travel” queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM) that meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.",
"title": ""
},
{
"docid": "23a4ae9092694cd0b49ad7e4b657baae",
"text": "BACKGROUND\nRegular physical activity is known to be beneficial for people with type 2 diabetes. Nevertheless, most of the people who have diabetes lead a sedentary lifestyle. Smartphones create new possibilities for helping people to adhere to their physical activity goals through continuous monitoring and communication, coupled with personalized feedback.\n\n\nOBJECTIVE\nThe aim of this study was to help type 2 diabetes patients increase the level of their physical activity.\n\n\nMETHODS\nWe provided 27 sedentary type 2 diabetes patients with a smartphone-based pedometer and a personal plan for physical activity. Patients were sent short message service messages to encourage physical activity between once a day and once per week. Messages were personalized through a Reinforcement Learning algorithm so as to improve each participant's compliance with the activity regimen. The algorithm was compared with a static policy for sending messages and weekly reminders.\n\n\nRESULTS\nOur results show that participants who received messages generated by the learning algorithm increased the amount of activity and pace of walking, whereas the control group patients did not. Patients assigned to the learning algorithm group experienced a superior reduction in blood glucose levels (glycated hemoglobin [HbA1c]) compared with control policies, and longer participation caused greater reductions in blood glucose levels. The learning algorithm improved gradually in predicting which messages would lead participants to exercise.\n\n\nCONCLUSIONS\nMobile phone apps coupled with a learning algorithm can improve adherence to exercise in diabetic patients. This algorithm can be used in large populations of diabetic patients to improve health and glycemic control. Our results can be expanded to other areas where computer-led health coaching of humans may have a positive impact. Summary of a part of this manuscript has been previously published as a letter in Diabetes Care, 2016.",
"title": ""
}
] |
scidocsrr
|
b3d5705e91c282ece4c2f09269b4a034
|
Global analytic solution of fully-observed variational Bayesian matrix factorization
|
[
{
"docid": "215ccfeaf75d443e8eb6ead8172c9b92",
"text": "Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.",
"title": ""
},
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "0d292d5c1875845408c2582c182a6eb9",
"text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation",
"title": ""
}
] |
[
{
"docid": "dbb6a635df0cae8d1994c94947e235db",
"text": "We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n−1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.",
"title": ""
},
{
"docid": "2949191659d01de73abdc749d5e51ca7",
"text": "BACKGROUND\nIsolated infraspinatus muscle atrophy is common in overhead athletes, who place significant and repetitive stresses across their dominant shoulders. Studies on volleyball and baseball players report infraspinatus atrophy in 4% to 34% of players; however, the prevalence of infraspinatus atrophy in professional tennis players has not been reported.\n\n\nPURPOSE\nTo investigate the incidence of isolated infraspinatus atrophy in professional tennis players and to identify any correlations with other physical examination findings, ranking performance, and concurrent shoulder injuries.\n\n\nSTUDY DESIGN\nCross-sectional study; Level of evidence, 3.\n\n\nMETHODS\nA total of 125 professional female tennis players underwent a comprehensive preparticipation physical health status examination. Two orthopaedic surgeons examined the shoulders of all players and obtained digital goniometric measurements of range of motion (ROM). Infraspinatus atrophy was defined as loss of soft tissue bulk in the infraspinatus scapula fossa (and increased prominence of dorsal scapular bony anatomy) of the dominant shoulder with clear asymmetry when compared with the contralateral side. Correlations were examined between infraspinatus atrophy and concurrent shoulder disorders, clinical examination findings, ROM, glenohumeral internal rotation deficit, singles tennis ranking, and age.\n\n\nRESULTS\nThere were 65 players (52%) with evidence of infraspinatus atrophy in their dominant shoulders. No wasting was noted in the nondominant shoulder of any player. No statistically significant differences were seen in mean age, left- or right-hand dominance, height, weight, or body mass index for players with or without atrophy. Of the 77 players ranked in the top 100, 58% had clinical infraspinatus atrophy, compared with 40% of players ranked outside the top 100. No associations were found with static physical examination findings (scapular dyskinesis, ROM glenohumeral internal rotation deficit, postural abnormalities), concurrent shoulder disorders, or compromised performance when measured by singles ranking.\n\n\nCONCLUSION\nThis study reports a high level of clinical infraspinatus atrophy in the dominant shoulder of elite female tennis players. Infraspinatus atrophy was associated with a higher performance ranking, and no functional deficits or associations with concurrent shoulder disorders were found. Team physicians can be reassured that infraspinatus atrophy is a common finding in high-performing tennis players and, if asymptomatic, does not appear to significantly compromise performance.",
"title": ""
},
{
"docid": "9086291516a6a45cdb9c68ab3695f231",
"text": "The study is to investigate resellers’ point of view about the impact of brand awareness, perceived quality and customer loyalty on brand profitability and purchase intention. Further the study is also focused on finding out the mediating role of purchase intension on the relationship of brand awareness and profitability, perceived quality and profitability and brand loyalty and profitability. The study was causal in nature and data was collected from 200 resellers. The results showed insignificant impact of brand awareness and loyalty whereas significant impact of perceived quality on profitability. Further the results revealed significant impact of brand awareness, perceived quality and loyalty on purchase intention. Sobel test for mediation showed that purchase intension mediates the relationship of the perceived quality and profitability only.",
"title": ""
},
{
"docid": "885fb29f5189381de351b634f4c7365c",
"text": "The main objectives of this study were to determine the most frequent and the most significant individual and social factors related to students’ academic achievement and motivation for learning. The study was conducted among 740 students from the Faculty of Education and the Faculty of Philosophy in Vojvodina. The participants completed questionnaires measuring students’ dominant individual and social motivational factors, the level of their motivation for learning, the level of their academic achievement and students’ socio-demographic characteristics. The results of this study showed that the students reported that both individual and social factors are related to their academic achievement and motivation for learning. Individual factors – the perceived interest in content and perceived content usefulness for personal development proved to be the most significant predictors of a high level of motivation for learning and academic success, but social motivational factors showed themselves to be the most frequent among students. The results are especially important for university teachers as guidelines for improving students’ motivation.",
"title": ""
},
{
"docid": "557694b6db3f20adc700876d75ad7720",
"text": "Unseen Action Recognition (UAR) aims to recognise novel action categories without training examples. While previous methods focus on inner-dataset seen/unseen splits, this paper proposes a pipeline using a large-scale training source to achieve a Universal Representation (UR) that can generalise to a more realistic Cross-Dataset UAR (CDUAR) scenario. We first address UAR as a Generalised Multiple-Instance Learning (GMIL) problem and discover 'building-blocks' from the large-scale ActivityNet dataset using distribution kernels. Essential visual and semantic components are preserved in a shared space to achieve the UR that can efficiently generalise to new datasets. Predicted UR exemplars can be improved by a simple semantic adaptation, and then an unseen action can be directly recognised using UR during the test. Without further training, extensive experiments manifest significant improvements over the UCF101 and HMDB51 benchmarks.",
"title": ""
},
{
"docid": "35443ed37528685e3395622327f7ea06",
"text": "The e-book industry is starting to flourish due, in part, to the availability of affordable and user-friendly e-book readers. As users are increasingly moving from traditional paper books to e-books, there is an opportunity to reinvent and enhance their reading experience, for example, by leveraging the multimedia capabilities of these devices in order to turn the act of reading into a real multimedia experience. In this paper, we focus on the augmentation of the written text with its associated audiobook, so that users can listen to the book they are (currently) reading. We propose an audiobook-to-ebook alignment system by applying a Text-to-Speech (TTS)-based text to audio alignment algorithm, and enhance it with a silence filtering algorithm to cope with the difference on reading style between the TTS output and the speakers in the ebook environment. Experiments done using 12 five-minute excerpts of 6 different audio-books (read by men and women) yield usable word alignment errors below 120ms for 90% of the words. Finally, we also show a user interface implementation in the Ipad for synchronized e-book reading while listening to the associated audiobook.",
"title": ""
},
{
"docid": "a75e29521b04d5e09228918e4ed560a6",
"text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.",
"title": ""
},
{
"docid": "45484e263769ada08d6af03e32f079fe",
"text": "In this paper, a triple-band monopole antenna for WLAN and WiMAX wireless communication applications is presented. The antenna has a simple structure designed for 2.4/5.2/5.8 GHz WLAN and 3.5/5.5 GHz WiMAX bands. The radiator is composed of just two branches and a short stub. The antenna is designed on a 40 × 40 × 0.8 mm3 substrate using computer simulation. For verification of simulation results, a prototype is fabricated and measured. Results show that the antenna can provide three impedance bandwidths, 2.35-2.58 GHz, 3.25-4 GHz and 4.95-5.9 GHz, for the WLAN and WiMAX applications. The simulated and measured radiation patterns, efficiencies and gains of the antenna are all presented.",
"title": ""
},
{
"docid": "3be81ea5a817d9999998c9d0b008d65b",
"text": "This paper contributes a real time method for recovering facial shape and expression from a single depth image. The method also estimates an accurate and dense correspondence field between the input depth image and a generic face model. Both outputs are a result of minimizing the error in reconstructing the depth image, achieved by applying a set of identity and expression blend shapes to the model. Traditionally, such a generative approach has shown to be computationally expensive and non-robust because of the non-linear nature of the reconstruction error. To overcome this problem, we use a discriminatively trained prediction pipeline that employs random forests to generate an initial dense but noisy correspondence field. Our method then exploits a fast ICP-like approximation to update these correspondences, allowing us to quickly obtain a robust initial fit of our model. The model parameters are then fine tuned to minimize the true reconstruction error using a stochastic optimization technique. The correspondence field resulting from our hybrid generative-discriminative pipeline is accurate and useful for a variety of applications such as mesh deformation and retexturing. Our method works in real-time on a single depth image i.e. Without temporal tracking, is free from per-user calibration, and works in low-light conditions.",
"title": ""
},
{
"docid": "326def5d55a8f45f9f1d85fd606588a9",
"text": "Visualization and situational awareness are of vital importance for power systems, as the earlier a power-system event such as a transmission line fault or cyber-attack is identified, the quicker operators can react to avoid unnecessary loss. Accurate time-synchronized data, such as system measurements and device status, provide benefits for system state monitoring. However, the time-domain analysis of such heterogeneous data to extract patterns is difficult due to the existence of transient phenomena in the analyzed measurement waveforms. This paper proposes a sequential pattern mining approach to accurately extract patterns of power-system disturbances and cyber-attacks from heterogeneous time-synchronized data, including synchrophasor measurements, relay logs, and network event monitor logs. The term common path is introduced. A common path is a sequence of critical system states in temporal order that represent individual types of disturbances and cyber-attacks. Common paths are unique signatures for each observed event type. They can be compared to observed system states for classification. In this paper, the process of automatically discovering common paths from labeled data logs is introduced. An included case study uses the common path-mining algorithm to learn common paths from a fusion of heterogeneous synchrophasor data and system logs for three types of disturbances (in terms of faults) and three types of cyber-attacks, which are similar to or mimic faults. The case study demonstrates the algorithm's effectiveness at identifying unique paths for each type of event and the accompanying classifier's ability to accurately discern each type of event.",
"title": ""
},
{
"docid": "324bbe1712342fcdbc29abfbebfaf29c",
"text": "Non-interactive zero-knowledge proofs are a powerful cryptographic primitive used in privacypreserving protocols. We design and build C∅C∅, the first system enabling developers to build efficient, composable, non-interactive zero-knowledge proofs for generic, user-defined statements. C∅C∅ extends state-of-the-art SNARK constructions by applying known strengthening transformations to yield UC-composable zero-knowledge proofs suitable for modular use in larger cryptographic protocols. To attain fast practical performance, C∅C∅ includes a library of several “SNARK-friendly” cryptographic primitives. These primitives are used in the strengthening transformations in order to reduce the overhead of achieving composable security. Our open-source library of optimized arithmetic circuits for these functions are up to 40× more efficient than standard implementations and are thus of independent interest for use in other NIZK projects. Finally, we evaluate C∅C∅ on applications such as anonymous credentials, private smart contracts, and nonoutsourceable proof-of-work puzzles and demonstrate 5× to 8× speedup in these application settings compared to naive implementations.",
"title": ""
},
{
"docid": "28574c82a49b096b11f1b78b5d62e425",
"text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.",
"title": ""
},
{
"docid": "adc84153f83ad1587a4218d817befe8d",
"text": "Improving the sluggish kinetics for the electrochemical reduction of water to molecular hydrogen in alkaline environments is one key to reducing the high overpotentials and associated energy losses in water-alkali and chlor-alkali electrolyzers. We found that a controlled arrangement of nanometer-scale Ni(OH)(2) clusters on platinum electrode surfaces manifests a factor of 8 activity increase in catalyzing the hydrogen evolution reaction relative to state-of-the-art metal and metal-oxide catalysts. In a bifunctional effect, the edges of the Ni(OH)(2) clusters promoted the dissociation of water and the production of hydrogen intermediates that then adsorbed on the nearby Pt surfaces and recombined into molecular hydrogen. The generation of these hydrogen intermediates could be further enhanced via Li(+)-induced destabilization of the HO-H bond, resulting in a factor of 10 total increase in activity.",
"title": ""
},
{
"docid": "b01cd9a7135dfa82bdcb14bcc52c8e43",
"text": "Path queries on a knowledge graph can be used to answer compositional questions such as “What languages are spoken by people living in Lisbon?”. However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new “compositional” training objective, which dramatically improves all models’ ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.",
"title": ""
},
{
"docid": "1e607279360f3318f3f020e19e1bd86f",
"text": "Only one late period is allowed for this homework (11:59pm 2/23). Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Warning: This problem requires substantial computing time (it can be a few hours on some systems). Don't start it at the last minute. 7 7 7 The goal of this problem is to implement the Stochastic Gradient Descent algorithm to build a Latent Factor Recommendation system. We can use it to recommend movies to users.",
"title": ""
},
{
"docid": "b908c2f0da3ea9ae15bdcda54442b381",
"text": "OpenMusic (OM) is a visual programming language developed on top of Common Lisp and CLOS, in which most of the functional and object-oriented programming concepts can be implemented and carried out graphically. Although this visual language was designed for musical applications, the focus in this paper is to describe and study OM as a complete general-purpose programming environment.",
"title": ""
},
{
"docid": "17146c10df3ff112c5c70dc2bdf03973",
"text": "Internet-based media and especially social networking sites differ from traditional media in that they allow individuals to interact with their friends in their networks. Moreover, Internet-based media are easily available on devices such as smartphones or tablets. Previous research has demonstrated that mass media contribute powerfully to an individual’s body dissatisfaction. To date, research on the effects related to exposure to ‘newer’ forms of media, in particular social media on the Internet, is scarce. The purpose of the current study is to review the extant body of research dealing with the influence of social media on body image concerns, especially among adolescents. Adolescents, via the Internet, get access to different kinds of Internet-based media, such as social media (including social networking sites). Our results document the importance of idealized social media models—especially thin-ideal models for girls and muscular-ideal models for boys—in shaping the body perceptions of adolescents. However, the effects of pressure from social media on body image concerns in men need to be further investigated both in clinical and community samples.",
"title": ""
},
{
"docid": "3e01af44d4819d8c78615e66f56e5983",
"text": "The amount of dynamic content on the web has been steadily increasing. Scripting languages such as JavaScript and browser extensions such as Adobe's Flash have been instrumental in creating web-based interfaces that are similar to those of traditional applications. Dynamic content has also become popular in advertising, where Flash is used to create rich, interactive ads that are displayed on hundreds of millions of computers per day. Unfortunately, the success of Flash-based advertisements and applications attracted the attention of malware authors, who started to leverage Flash to deliver attacks through advertising networks. This paper presents a novel approach whose goal is to automate the analysis of Flash content to identify malicious behavior. We designed and implemented a tool based on the approach, and we tested it on a large corpus of real-world Flash advertisements. The results show that our tool is able to reliably detect malicious Flash ads with limited false positives. We made our tool available publicly and it is routinely used by thousands of users.",
"title": ""
},
{
"docid": "8da51332da3cda4644dc3360117aa7f7",
"text": "This article presents a new theory of subjective probability according to which different descriptions of the same event can give rise to different judgments. The experimental evidence confirms the major predictions of the theory. First, judged probability increases by unpacking the focal hypothesis and decreases by unpacking the alternative hypothesis. Second, judged probabilities are complementary in the binary case and subadditive in the general case, contrary to both classical and revisionist models of belief. Third, subadditivity is more pronounced for probability judgments than for frequency judgments and is enhanced by compatible evidence. The theory provides a unified treatment of a wide range of empirical findings. It is extended to ordinal judgments and to the assessment of upper and lower probabilities.",
"title": ""
},
{
"docid": "738e0b3b40a04255f36ba99d4975a39f",
"text": "We describe the creation of a recommendation system for Reddit, a social news and entertainment site where community members can submit content in the form of text posts, links, or images. Our dataset consists of 23,091,688 votes from 43,976 users over 3,436,063 links in 11,675 subreddits. Using this network, we constructed a weighted graph of subreddits, partitioned it into 217 distinct subcommunities, and created both an item-based and user-based recommendation algorithm. Given a user and a subreddit, our algorithm recommends to the user novel subreddits within the same subcommunity. User-based recommendation was found to outperform item-based recommendation.",
"title": ""
}
] |
scidocsrr
|
039e3e5e4c9a46130c751fc12b95a679
|
On Loss Functions for Deep Neural Networks in Classification
|
[
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
}
] |
[
{
"docid": "ada35607fa56214e5df8928008735353",
"text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.",
"title": ""
},
{
"docid": "11e2ec2aab62ba8380e82a18d3fcb3d8",
"text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.",
"title": ""
},
{
"docid": "cdcfd25cd84870b51297ec776c8fa447",
"text": "This paper aims at the construction of a music composition system that generates 16-bars musical works by interaction between human and the system, using interactive genetic algorithm. The present system generates not only various kinds of melody parts but also various kinds of patterns of backing parts and tones of all parts, so that users can acquire satisfied musical work. The users choose generating mode of musical work from three points, i.e., melody part, tones of all parts, or patterns of backing parts, and the users evaluate impressions of presented candidates of musical work through the user interface. The present system generates the candidates based on user's subjective evaluation. This paper shows evaluation experiments to confirm the usefulness of the present system.",
"title": ""
},
{
"docid": "9229c3eae864cf924226ffb483617220",
"text": "Great effort has been put into the development of diagnosis methods for the most dangerous type of skin diseases Melanoma. This paper aims to develop a prototype capable of segment and classify skin lesions in dermoscopy images based on ABCD rule. The proposed work is divided into four distinct stages: 1) Pre-processing, consists of filtering and contrast enhancing techniques. 2) Segmentation, thresholding and statistical properties are computed to localize the lesion. 3) Features extraction, Asymmetry is calculated by averaging the calculated results of the two methods: Entropy and Bi-fold. Border irregularity is calculated by accumulate the statistical scores of the eight segments of the segmented lesion. Color feature is calculated among the existence of six candidate colors: white, black, red, light-brown, dark-brown, and blue-gray. Diameter is measured by the conversion operation from the total number of pixels in the greatest diameter into millimeter (mm). 4) Classification, the summation of the four extracted feature scores multiplied by their weights to yield a total dermoscopy score (TDS); hence, the lesion is classified into benign, suspicious, or malignant. The prototype is implemented in MATLAB and the dataset used consists of 200 dermoscopic images from Hospital Pedro Hispano, Matosinhos. The achieved results shows an acceptable performance rates, an accuracy 90%, sensitivity 85%, and specificity 92.22%.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "7bdebaf86fd679ae00520dc8f7ee3afa",
"text": "Studies show that attractive women demonstrate stronger preferences for masculine men than relatively unattractive women do. Such condition-dependent preferences may occur because attractive women can more easily offset the costs associated with choosing a masculine partner, such as lack of commitment and less interest in parenting. Alternatively, if masculine men display negative characteristics less to attractive women than to unattractive women, attractive women may perceive masculine men to have more positive personality traits than relatively unattractive women do. We examined how two indices of women’s attractiveness, body mass index (BMI) and waist–hip ratio (WHR), relate to perceptions of both the attractiveness and trustworthiness of masculinized versus feminized male faces. Consistent with previous studies, women with a low (attractive) WHR had stronger preferences for masculine male faces than did women with a relatively high (unattractive) WHR. This relationship remained significant when controlling for possible effects of BMI. Neither WHR nor BMI predicted perceptions of trustworthiness. These findings present converging evidence for condition-dependent mate preferences in women and suggest that such preferences do not reflect individual differences in the extent to which pro-social traits are ascribed to feminine versus masculine men. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b4ed15850674851fb7e479b7181751d7",
"text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"title": ""
},
{
"docid": "34ab20699d12ad6cca34f67cee198cd9",
"text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.",
"title": ""
},
{
"docid": "df48f9d3096d8528e9f517783a044df8",
"text": "We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each of these innovations.",
"title": ""
},
{
"docid": "83b79fc95e90a303f29a44ef8730a93f",
"text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and includes variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such enormous number of devices connected to internet provides many kinds of services. They also produce huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computer, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructures ,software and applications. Cloud based platforms help to connect to the things around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications. Hence, cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable the real time processing of the data, information and high speed network to stream audio or video. Here we have describe how Internet of Things and Cloud computing can work together can address the Big Data problems. We have also illustrated about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture, Environment monitoring,etc. Finally, we propose a prototype model for providing sensing as a service on cloud.",
"title": ""
},
{
"docid": "be5b9ba8398732d0e5a55fd918097f36",
"text": "There has been a significant amount of research in Artificial Intelligence focusing on the representation of legislation and regulations. The motivation for this has been twofold: on the one hand there have been opportunities for developing advisory systems for legal practitioners; on the other hand the law is a complex domain in which diverse modes of reasoning are employed, offering ample opportunity to test existing Artificial Intelligence techniques as well as to develop new ones. The general aim of the thesis is to explore the potential for developing logic-based tools for the analysis and representation of legal contracts, by considering the following two questions: (a) To what extent can techniques developed for the representation of legislation and regulations be transferred and applied usefully in the domain of legal contracts? (b) What features are specific to legal contracts and what techniques can be developed to address them? The intended applications include both the drafting of new contracts and the management and administration of existing ones, that is to say, the general problem of storing and retrieving information from large contractual documents, and more specific tasks such as monitoring compliance or establishing parties’ duties/rights under a given agreement when it is in force. Experimental material is drawn mostly from engineering contracts, which are typically large and complex and contain a multitude of interrelated provisions. The term ‘contract’ is commonly used to refer both to a legally binding agreement between (usually) two parties and to the document, that records such an agreement. The first part of the thesis is concerned with documents and the representation of contracts at the macro-level: the emphasis is on issues relevant to the design of structurally coherent documents. The thesis presents a document assembly tool designed to be applicable, where contract drafting is based on model-form contracts or existing examples of a given type. The main features of the approach are: (i) the representation addresses the structure and interrelationships between the constituent parts of contracts but not the text of the document itself; (ii) the representation of documents is separated from the mechanisms that manipulate it; and (iii) the drafting process is subject to a collection of explicitly represented constraints that govern the structure of documents. The second part of the thesis deals with the contents of agreements and representations at the micro-level. Micro-level drafting is the source of a host of issues ranging from representing the detailed wording of individual sections, to representing the nature of provisions (obligations, powers, reparations, procedures), to representing their \"fitness\" or effectiveness in securing some party's best interests. Various techniques are available to assist in aspects of this task, such as disambiguating contractual provisions and in detecting inconsistency or incompleteness. The second part of the thesis comprises three discussions. The first is on contractual obligations as the result of promissory exchanges between parties and draws upon work by Kimbrough and his associates. The second concentrates on contractual obligations and common patterns encountered in contracts. The third is concerned with temporal verification of contracts and shows how the techniques employed in model checking for hardware specification can be transferred to the domain of contracts.",
"title": ""
},
{
"docid": "e00295dc86476d1d350d11068439fe87",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "350c7855cf36fcde407a84f8b66f33d8",
"text": "This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and $F_{0}$ features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.",
"title": ""
},
{
"docid": "9cf59b5f67d07787da8eeae825066525",
"text": "Event correlation has become the cornerstone of many reactive applications, particularly in distributed systems. However, support for programming with complex events is still rather specific and rudimentary. This paper presents EventJava, an extension of Java with generic support for event-based distributed programming. EventJava seamlessly integrates events with methods, and broadcasting with unicasting of events; it supports reactions to combinations of events, and predicates guarding those reactions. EventJava is implemented as a framework to allow for customization of event semantics, matching, and dispatching. We present its implementation, based on a compiler transforming specific primitives to Java, along with a reference implementation of the framework. We discuss ordering properties of EventJava through a formalization of its core as an extension of Featherweight Java. In a performance evaluation, we show that EventJava compares favorably to a highly tuned database-backed event correlation engine as well as to a comparably lightweight concurrency mechanism.",
"title": ""
},
{
"docid": "268a7147cc4ae486bf4b9184787b9492",
"text": "Autonomous vehicles will need to decide on a course of action when presented with multiple less-than-ideal outcomes.",
"title": ""
},
{
"docid": "e8c97daac0301310074698273d813772",
"text": "Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.",
"title": ""
},
{
"docid": "339f0935708aa6c5f8be704a1e8004e5",
"text": "Evolution sculpts both the body plans and nervous systems of agents together over time. By contrast, in artificial intelligence and robotics, a robot's body plan is usually designed by hand, and control policies are then optimized for that fixed design. The task of simultaneously co-optimizing the morphology and controller of an embodied robot has remained a challenge. In psychology, the theory of embodied cognition posits that behaviour arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioural performance. Here, we further examine this hypothesis and demonstrate a technique for 'morphological innovation protection', which temporarily reduces selection pressure on recently morphologically changed individuals, thus enabling evolution some time to 'readapt' to the new morphology with subsequent control policy mutations. We show the potential for this method to avoid local optima and converge to similar highly fit morphologies across widely varying initial conditions, while sustaining fitness improvements further into optimization. While this technique is admittedly only the first of many steps that must be taken to achieve scalable optimization of embodied machines, we hope that theoretical insight into the cause of evolutionary stagnation in current methods will help to enable the automation of robot design and behavioural training-while simultaneously providing a test bed to investigate the theory of embodied cognition.",
"title": ""
},
{
"docid": "de96ac151e5a3a2b38f2fa309862faee",
"text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.",
"title": ""
},
{
"docid": "33390e96d05644da201db3edb3ad7338",
"text": "This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential modelbased optimization (SMBO) [15], and increasing training efficiency by sharing weights among sampled architectures [18]. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures.",
"title": ""
},
{
"docid": "1da9ea0ec4c33454ad9217bcf7118c1c",
"text": "We use quantitative media (blogs, and news as a comparison) data generated by a large-scale natural language processing (NLP) text analysis system to perform a comprehensive and comparative study on how a company’s reported media frequency, sentiment polarity and subjectivity anticipates or reflects its stock trading volumes and financial returns. Our analysis provides concrete evidence that media data is highly informative, as previously suggested in the literature – but never studied on our scale of several large collections of blogs and news for over five years. Building on our findings, we give a sentiment-based market-neutral trading strategy which gives consistently favorable returns with low volatility over a five year period (2005-2009). Our results are significant in confirming the performance of general blog and news sentiment analysis methods over broad domains and sources. Moreover, several remarkable differences between news and blogs are also identified in this paper.",
"title": ""
}
] |
scidocsrr
|
2937d8a25a283954f7fa502245460da2
|
A 3D-polar Coordinate Colour Representation Suitable for Image Analysis
|
[
{
"docid": "daa4114fe8ba064e816db1d579808fee",
"text": "Digital control of color television monitors—in particular, via frame buffers—has added precise control of a large subset of human colorspace to the capabilities of computer graphics. This subset is the gamut of colors spanned by the red, green, and blue (RGB) electron guns exciting their respective phosphors. It is called the RGB monitor gamut. Full-blown color theory is a quite complex subject involving physics, psychology, and physiology, but restriction to the RGB monitor gamut simplifies matters substantially. It is linear, for example, and admits to familiar spatial representations. This paper presents a set of alternative models of the RGB monitor gamut based on the perceptual variables hue (H), saturation (S), and value (V) or brightness (L). Algorithms for transforming between these models are derived. Particular emphasis is placed on an RGB to HSV non-trigonometric pair of transforms which have been used successfully for about four years in frame buffer painting programs. These are fast, accurate, and adequate in many applications. Computationally more difficult transform pairs are sometimes necessary, however. Guidelines for choosing among the models are provided. Psychophysical corrections are described within the context of the definitions established by the NTSC (National Television Standards Committee).",
"title": ""
}
] |
[
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "b382f93bb45e7324afaff9950d814cf3",
"text": "OBJECTIVE\nA vocational rehabilitation program (occupational therapy and supported employment) for promoting the return to the community of long-stay persons with schizophrenia was established at a psychiatric hospital in Japan. The purpose of the study was to evaluate the program in terms of hospitalization rates, community tenure, and social functioning with each individual serving as his or her control.\n\n\nMETHODS\nFifty-two participants, averaging 8.9 years of hospitalization, participated in the vocational rehabilitation program consisting of 2 to 6 hours of in-hospital occupational therapy for 6 days per week and a post-discharge supported employment component. Seventeen years after the program was established, a retrospective study was conducted to evaluate the impact of the program on hospitalizations, community tenure, and social functioning after participants' discharge from hospital, using an interrupted time-series analysis. The postdischarge period was compared with the period from onset of illness to the index discharge on the three outcome variables.\n\n\nRESULTS\nAfter discharge from the hospital, the length of time spent by participants out of the hospital increased, social functioning improved, and risk of hospitalization diminished by 50%. Female participants and those with supportive families spent more time out of the hospital than participants who were male or came from nonsupportive families.\n\n\nCONCLUSION\nA combined program of occupational therapy and supported employment was successful in a Japanese psychiatric hospital when implemented with the continuing involvement of a clinical team. Interventions that improve the emotional and housing supports provided to persons with schizophrenia by their families are likely to enhance the outcome of vocational services.",
"title": ""
},
{
"docid": "9694672ebbc3d79557fd65a5381f780a",
"text": "The Web enables broad dissemination of information and services; however, the ways in which sites are designed can either facilitate or impede users' benefit from these resources. We present a longitudinal study of web site design from 2000 to 2003. We analyze over 150 quantitative measures of interface aspects (e.g., amount of text on pages, numbers and types of links, consistency, accessibility, etc.) for 22,000 pages and over 1,500 sites that received ratings from Internet professionals. We examine characteristics of highly rated sites and provide three perspectives on the evolution of web site design patterns: (1) descriptions of design patterns during each time period; (2) changes in design patterns across the three time periods; and (3) comparisons of design patterns to those that are recommended in the relevant literature (i.e., texts by recognized experts and user studies). We illustrate how design practices conform to or deviate from recommended practices and the consequent implications. We show that the most glaring deficiency of web sites, even for sites that are highly rated, is their inadequate accessibility, in particular for browser scripts, tables, and form elements.",
"title": ""
},
{
"docid": "09bfd65053c41aae476ddda960e5fc0d",
"text": "With the proliferation of portable and mobile IoT devices and their increasing processing capability, we witness that the edge of network is moving to the IoT gateways and smart devices. To avoid Big Data issues (e.g. high latency of cloud based IoT), the processing of the captured data is starting from the IoT edge node. However, the available processing capabilities and energy resources are still limited and do not allow to fully process the data on-board. It calls for offloading some portions of computation to the gateway or servers. Due to the limited bandwidth of the IoT gateways, choosing the offloading levels of connected devices and allocating bandwidth to them is a challenging problem. This paper proposes a technique for managing computation offloading in a local IoT network under bandwidth constraints. The existing bandwidth allocation and computation offloading management techniques underutilize the gateway's resources (e.g. bandwidth) due to the fragmentation issue. This issue stems from the discrete coarse-grained choices (i.e. offloading levels) on the IoT end nodes. Our proposed technique addresses this issue, and utilizes the available resources of the gateway effectively. The experimental results show on average 1 hour (up to 1.5 hour) improvement in battery life of edge devices. The utilization of gateway's bandwidth increased by 40%.",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "18bc3abbd6a4f51fdcfbafcc280f0805",
"text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.",
"title": ""
},
{
"docid": "375e98976f35a221af87388f4b8f83d5",
"text": "Infomediaries are information intermediaries in the Internet that play an important part in reducing online customers’ search costs for finding the most suitable vendors and products. This research explores the process of Web customers’ trust building using infomediaries. Specifically, we identify four sets of trust-related beliefs that impact web customers’ trust attitude and intended behavior, as well as the antecedents of trust-related beliefs. This conceptualization is built on a number of theories, mainly theories of trust, reasoned action, and actor-network. Our empirical results in testing the model indicate that web customers’ trust attitude toward web infomediaries is formed based on their beliefs regarding risk, content quality, system quality, and trustbuilding beliefs. We also found that initial trust is the antecedent of trust-forming beliefs, whereas individuals’ propensity to trust influences their calculative risk beliefs. The implications of these findings are also discussed.",
"title": ""
},
{
"docid": "11afe3e3e94ca2ec411f38bf1b0b2e82",
"text": "The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.",
"title": ""
},
{
"docid": "e19de4cf8ddf88b5469104f83151780b",
"text": "Nonholonomic mobile robots are characterized by no-slip constraints. However, in many practical situations, slips are inevitable. In this work, we develop a theoretical and systematic framework to include slip dynamics into the overall dynamics of the wheeled mobile robot (WMR). Such a dynamic model is useful to understand the slip characteristics during navigation of the WMR. We further design a planner and a controller that allow efficient navigation of the WMR by controlling the slip. Preliminary simulation results are presented to demonstrate the usefulness of the proposed modeling and control techniques.",
"title": ""
},
{
"docid": "5feea8e7bcb96c826bdf19922e47c922",
"text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.",
"title": ""
},
{
"docid": "90494c890c7f9625fa69ea3d8aa3f6ae",
"text": "Mobile phones' increasing ubiquity has created many opportunities for personal context sensing. Personal activity is an important part of a user's context, and automatically recognizing it is vital for health and fitness monitoring applications. Recording a stream of activity data enables monitoring patients with chronic conditions affecting ambulation and motion, as well as those undergoing rehabilitation treatments. Modern mobile phones are powerful enough to perform activity classification in real time, but they typically use a static classifier that is trained in advance or require the user to manually add training data after the application is on his/her device. This paper investigates ways of automatically augmenting activity classifiers after they are deployed in an application. It compares active learning and three different semi-supervised learning methods, self-learning, En-Co-Training, and democratic co-learning, to determine which show promise for this purpose. The results show that active learning, En-Co-Training, and democratic co-learning perform well when the initial classifier's accuracy is low (75–80%). When the initial accuracy is already high (90%), these methods are no longer effective, but they do not hurt the accuracy either. Overall, active learning gave the highest improvement, but democratic co-learning was almost as good and does not require user interaction. Thus, democratic co-learning would be the best choice for most applications, since it would significantly increase the accuracy for initial classifiers that performed poorly.",
"title": ""
},
{
"docid": "18e02cb5b4e6ca19d7e9f769d34d1337",
"text": "The performance of object detection has been improved as the success of deep architectures. The main algorithm predominantly used for general detection is Faster R-CNN because of their high accuracy and fast inference time. In pedestrian detection, Region Proposal Network (RPN) itself which is used for region proposals in Faster R-CNN can be used as a pedestrian detector. Also, RPN even shows better performance than Faster R-CNN for pedestrian detection. However, RPN generates severe false positives such as high score backgrounds and double detections because it does not have downstream classifier. From this observations, we made a network to refine results generated from the RPN. Our Refinement Network refers to the feature maps of the RPN and trains the network to rescore severe false positives. Also, we found that different type of feature referencing method is crucial for improving performance. Our network showed better accuracy than RPN with almost same speed on Caltech Pedestrian Detection benchmark.",
"title": ""
},
{
"docid": "e834a1a349cc4f0da70c6eaedc32f5e3",
"text": "The ability to create stable, encompassing grasps with subsets of fingers is greatly increased by using soft fingertips that deform during contact and apply a larger space of frictional forces and moments than their rigid counterparts. This is true not only for human grasping, but also for robotic hands using fingertips made of soft materials. The superiority of deformable human fingertips as compared to hard robot gripper fingers for grasping and manipulation has led to a number of investigations with robot hands employing elastomers or materials such as fluids or powders beneath a membrane at the fingertips. When the fingers are soft, during holding and for manipulation of the object through precise dimensions, their property of softness maintains the area contact between, the fingertips and the manipulating object, which restraints the object and provides stability. In human finger there is a natural softness which is a combination of elasticity and damping. This combination of elasticity and damping is produced by nature due to flesh and blood beneath the skin. This keeps the contact firm and helps in holding the object firmly and stably.",
"title": ""
},
{
"docid": "3a0d38ba7d29358e511d5eef24360713",
"text": "In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.",
"title": ""
},
{
"docid": "68191b71a4f944178ffcf5e8317e9725",
"text": "There is a wide inter-individual response to statin therapy including rosuvastatin calcium (RC), and it has been hypothesized that genetic differences may contribute to these variations. In fact, several studies have shown that pharmacokinetic (PK) parameters for RC are affected by race. The aim of this study is to demonstrate the interchangeability between two generic RC 20 mg film-coated tablets under fasting conditions among Mediterranean Arabs and to compare the pharmacokinetic results with Asian and Caucasian subjects from other studies. A single oral RC 20 mg dose, randomized, open-label, two-way crossover design study was conducted in 30 healthy Mediterranean Arab volunteers. Blood samples were collected prior to dosing and over a 72-h period. Concentrations in plasma were quantified using a validated liquid chromatography tandem mass spectrometry method. Twenty-six volunteers completed the study. Statistical comparison of the main PK parameters showed no significant difference between the generic and branded products. The point estimates (ratios of geometric mean %) were 107.73 (96.57-120.17), 103.61 (94.03-114.16), and 104.23 (94.84-114.54) for peak plasma concentration (Cmax), Area Under the Curve (AUC)0→last, and AUC0→∞, respectively. The 90% confidence intervals were within the pre-defined limits of 80%-125% as specified by the Food and Drug Administration and European Medicines Agency for bioequivalence studies. Both formulations were well-tolerated and no serious adverse events were reported. The PK results (AUC0→last and Cmax) were close to those of the Caucasian subjects. This study showed that the test and reference products met the regulatory criteria for bioequivalence following a 20 mg oral dose of RC under fasting conditions. Both formulations also showed comparable safety results. The PK results of the test and reference in the study subjects fall within the acceptable interval of 80%-125% and they were very close to the results among Caucasians. These PK results may be useful in order to determine the suitable RC dose among Arab Mediterranean patients.",
"title": ""
},
{
"docid": "7203aedbdb4c3b42c34dafdefe082b63",
"text": "We discuss silver ink as a low cost option for manufacturing RFID tags at ultra high frequencies (UHF). An analysis of two different RFID tag antennas, made from silver ink and from copper, is presented at UHF. The influence of each material on tag performance is discussed along with simulation results and measurement data which are in good agreement. It is observed that RFID tag performance depends both on material and on the shape of the antenna. For some classes of antennas, silver ink with higher conductivity performs as well as copper, which makes it an attractive low cost alternative material to copper for RFID tag antennas.",
"title": ""
},
{
"docid": "8d7a41aad86633c9bb7da8adfde71883",
"text": "Nuclear receptors (NRs) are major pharmacological targets that allow an access to the mechanisms controlling gene regulation. As such, some NRs were identified as biological targets of active compounds contained in herbal remedies found in traditional medicines. We aim here to review this expanding literature by focusing on the informative articles regarding the mechanisms of action of traditional Chinese medicines (TCMs). We exemplified well-characterized TCM action mediated by NR such as steroid receptors (ER, GR, AR), metabolic receptors (PPAR, LXR, FXR, PXR, CAR) and RXR. We also provided, when possible, examples from other traditional medicines. From these, we draw a parallel between TCMs and phytoestrogens or endocrine disrupting chemicals also acting via NR. We define common principle of action and highlight the potential and limits of those compounds. TCMs, by finely tuning physiological reactions in positive and negative manners, could act, in a subtle but efficient way, on NR sensors and their transcriptional network.",
"title": ""
},
{
"docid": "303509037a36933e6c999067e7b34bc6",
"text": "Corporates and organizations across the globe are spending huge sums on information security as they are reporting an increase in security related incidents. The proliferation of cloud, social network and multiple mobile device usage is on one side represent an opportunity and benefits to the organisation and on other side have posed new challenges for those policing cybercrimes. Cybercriminals have devised more sophisticated and targeted methods/techniques to trap victim and breach security setups. The emergence of highly technical nature of digital crimes has created a new branch of science known as digital forensics. Digital Forensics is the field of forensics science that deals with digital crimes and crimes involving computers. This paper focuses on briefing of digital forensics, various phases of digital forensics, digital forensics tools and its comparisons, and emerging trends and issues in this fascinated area. Keywords— Digital forensics, Digital evidence, Digital forensics tools, Network intrusion, Information security,",
"title": ""
},
{
"docid": "b50ea06c20fb22d7060f08bc86d9d6ca",
"text": "The advent of the Social Web has provided netizens with new tools for creating and sharing, in a time- and cost-efficient way, their contents, ideas, and opinions with virtually the millions of people connected to the World Wide Web. This huge amount of information, however, is mainly unstructured as specifically produced for human consumption and, hence, it is not directly machine-processable. In order to enable a more efficient passage from unstructured information to structured data, aspect-based opinion mining models the relations between opinion targets contained in a document and the polarity values associated with these. Because aspects are often implicit, however, spotting them and calculating their respective polarity is an extremely difficult task, which is closer to natural language understanding rather than natural language processing. To this end, Sentic LDA exploits common-sense reasoning to shift LDA clustering from a syntactic to a semantic level. Rather than looking at word co-occurrence frequencies, Sentic LDA leverages on the semantics associated with words and multi-word expressions to improve clustering and, hence, outperform state-of-the-art techniques for aspect extraction.",
"title": ""
},
{
"docid": "0873dd0181470d722f0efcc8f843eaa6",
"text": "Compared to traditional service, the characteristics of the customer behavior in electronic service are personalized demand, convenient consumed circumstance and perceptual consumer behavior. Therefore, customer behavior is an important factor to facilitate online electronic service. The purpose of this study is to explore the key success factors affecting customer purchase intention of electronic service through the behavioral perspectives of customers. Based on the theory of technology acceptance model (TAM) and self service technology (SST), the study proposes a theoretical model for the empirical examination of the customer intention for purchasing electronic services. A comprehensive survey of online customers having e-shopping experiences is undertaken. Then this model is tested by means of the statistical analysis method of structure equation model (SEM). The empirical results indicated that perceived usefulness and perceived assurance have a significant impact on purchase in e-service. Discussion and implication are presented in the end.",
"title": ""
}
] |
scidocsrr
|
0a83242fdeb2369b97c1bd58cdc6c123
|
On the Sensor Design of Torque Controlled Actuators: A Comparison Study of Strain Gauge and Encoder-Based Principles
|
[
{
"docid": "1a485c2a234c76ea24ac680920a87574",
"text": "The introduction of intrinsic compliance in the actuation system of assistive robots improves safety and dynamical adaptability. Furthermore, in the case of wearable robots for gait assistance, the exploitation of conservative compliant elements as energy buffers can mimic the intrinsic dynamical properties of legs during locomotion. However, commercially available compliant components do not generally allow to meet the desired requirements in terms of admissible peak load, as typically required by gait assistance, while guaranteeing low stiffness and a compact and lightweight design. This paper presents a novel compact monolithic torsional spring to be used as the basic component of a modular compliant system for Series Elastic Actuators. The spring, whose design was re ned through an iterative FEA-based optimization process, has an external diameter of 85 mm, a thickness of 3 mm and a weight of 61.5 g. The spring, characterized using a custom dynamometric test bed, shows a linear torque vs. angle characteristic. The compliant element has a stiffness of 98 N m/rad and it is capable of withstanding a maximum torque of 7.68 N m. A good agreement between simulated and experimental data was observed, with a maximum resultant error of 6%. By arranging a number of identical springs in series or in parallel, it is possible to render different torque vs. angle characteristics, in order to match the speci c applications requirements.",
"title": ""
},
{
"docid": "4b5c5b76d7370a82f96f36659cd63850",
"text": "For force control of robot and collision detection with humans, robots that has joint torque sensors have been developed. However, existing torque sensors cannot measure correct torque because of crosstalk error. In order to solve this problem, we proposed a novel torque sensor that can measure the pure torque without crosstalk. The hexaform of the proposed sensor with truss structure increase deformation of the sensor and restoration, and the Wheatstone bridge circuit of strain gauge removes crosstalk error. Sensor performance is verified with FEM analysis.",
"title": ""
}
] |
[
{
"docid": "6d1abbe80f2d3aebee09e0d473afc400",
"text": "Vehicular Ad-hoc Network (VANET) is a rising & most challenging research area to provide Intelligent Transportation System (ITS) services to the end users. The implementation of routing protocols in VANET is an exigent task as of its high mobility & frequent link disruption topology. VANET is basically used to provide various infotainment services to each and every end user; these services are further responsible to provide an efficient driving environment. At present, to provide efficient communication in vehicular networks several routing protocols have been designed, but the networks are vulnerable to several threats in the presence of malicious nodes. Today, security is the major concern for various VANET applications where a wrong message may directly or indirectly affect the human lives. In this paper, we investigate the several security issues on network layer in VANET. In this, we also examine routing attacks such as Sybil & Illusion attacks, as well as available solutions for such attacks in existing VANET protocols. KeywordsVehicular Ad-hoc Network (VANET), Intelligent Transportation System (ITS), On-board Unit (OBU), security attacks, privacy.",
"title": ""
},
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "71b59076bf36de415c5cf6b86cec165f",
"text": "Most existing structure from motion (SFM) approaches for unordered images cannot handle multiple instances of the same structure in the scene. When image pairs containing different instances are matched based on visual similarity, the pairwise geometric relations as well as the correspondences inferred from such pairs are erroneous, which can lead to catastrophic failures in the reconstruction. In this paper, we investigate the geometric ambiguities caused by the presence of repeated or duplicate structures and show that to disambiguate between multiple hypotheses requires more than pure geometric reasoning. We couple an expectation maximization (EM)-based algorithm that estimates camera poses and identifies the false match-pairs with an efficient sampling method to discover plausible data association hypotheses. The sampling method is informed by geometric and image-based cues. Our algorithm usually recovers the correct data association, even in the presence of large numbers of false pairwise matches.",
"title": ""
},
{
"docid": "0a8300fd3760223f5bf0df3d1187a6a5",
"text": "The glare illusion is commonly used in CG rendering, especially in game engines, to achieve a higher brightness than that of the maximum luminance of a display. In this work, we measure the perceived luminance of the glare illusion in a psychophysical experiment. To evoke the illusion, an image is convolved with either a point spread function (PSF) of the eye or a Gaussian kernel. It is found that 1) the Gaussian kernel evokes an illusion of the same or higher strength than that produced by the PSF while being computationally much less expensive, 2) the glare illusion can raise the perceived luminance by 20 -- 35%, 3) some convolution kernels can produce undesirable Mach-band effects and thereby reduce the brightness boost of the glare illusion. The reported results have practical implications for glare rendering in computer graphics.",
"title": ""
},
{
"docid": "3a1f2070cad8641d9116c3738a36e5bc",
"text": "Several real-world prediction problems are subject to changes over time due to their dynamic nature. These changes, named concept drift, usually lead to immediate and disastrous loss in classifier's performance. In order to cope with such a serious problem, drift detection methods have been proposed in the literature. However, current methods cannot be widely used since they are based either on performance monitoring or on fully labeled data, or even both. Focusing on overcoming these drawbacks, in this work we propose using density variation of the most significant instances as an explicit unsupervised trigger for concept drift detection. Here, density variation is based on Active Learning, and it is calculated from virtual margins projected onto the input space according to classifier confidence. In order to investigate the performance of the proposed method, we have carried out experiments on six databases, precisely four synthetic and two real databases focusing on setting up all parameters involved in our method and on comparing it to three baselines, including two supervised drift detectors and one Active Learning-based strategy. The obtained results show that our method, when compared to the supervised baselines, reached better recognition rates in the majority of the investigated databases, while keeping similar or higher detection rates. In terms of the Active Learning-based strategies comparison, our method outperformed the baseline taking into account both recognition and detection rates, even though the baseline employed much less labeled samples. Therefore, the proposed method established a better trade-off between amount of labeled samples and detection capability, as well as recognition rate.",
"title": ""
},
{
"docid": "2d7ea221d2bce97c2a91ee26a3793d0d",
"text": "In this article we introduce modern statistical machine learning and bioinformatics approaches that have been used in learning statistical relationships from big data in medicine and behavioral science that typically include clinical, genomic (and proteomic) and environmental variables. Every year, data collected from biomedical and behavioral science is getting larger and more complicated. Thus, in medicine, we also need to be aware of this trend and understand the statistical tools that are available to analyze these datasets. Many statistical analyses that are aimed to analyze such big datasets have been introduced recently. However, given many different types of clinical, genomic, and environmental data, it is rather uncommon to see statistical methods that combine knowledge resulting from those different data types. To this extent, we will introduce big data in terms of clinical data, single nucleotide polymorphism and gene expression studies and their interactions with environment. In this article, we will introduce the concept of well-known regression analyses such as linear and logistic regressions that has been widely used in clinical data analyses and modern statistical models such as Bayesian networks that has been introduced to analyze more complicated data. Also we will discuss how to represent the interaction among clinical, genomic, and environmental data in using modern statistical models. We conclude this article with a promising modern statistical method called Bayesian networks that is suitable in analyzing big data sets that consists with different type of large data from clinical, genomic, and environmental data. Such statistical model form big data will provide us with more comprehensive understanding of human physiology and disease.",
"title": ""
},
{
"docid": "720318ed13643b2e4890e64e011c3bff",
"text": "This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: (1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data; and (2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency stimuli, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image datasets, which enables our network to learn diverse saliency stimuli and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency stimuli, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the DAVIS dataset (MAE of .06) and the FBMS dataset (MAE of .07), and do so with much improved speed (2fps with all steps) on one GPU.",
"title": ""
},
{
"docid": "774872f2e95615e83d90948b209f26b8",
"text": "Most recent approaches to monocular 3D human pose estimation rely on Deep Learning. They typically involve regressing from an image to either 3D joint coordinates directly or 2D joint locations from which 3D coordinates are inferred. Both approaches have their strengths and weaknesses and we therefore propose a novel architecture designed to deliver the best of both worlds by performing both simultaneously and fusing the information along the way. At the heart of our framework is a trainable fusion scheme that learns how to fuse the information optimally instead of being hand-designed. This yields significant improvements upon the state-of-the-art on standard 3D human pose estimation benchmarks.",
"title": ""
},
{
"docid": "b44c6f387fb8ae7084854e0eca27a6fa",
"text": "Static memory management replaces runtime garbage collection with compile-time annotations that make all memory allocation and deallocation explicit in a program. We improve upon the Tofte/Talpin region-based scheme for compile-time memory management[TT94]. In the Tofte/Talpin approach, all values, including closures, are stored in regions. Region lifetimes coincide with lexical scope, thus forming a runtime stack of regions and eliminating the need for garbage collection. We relax the requirement that region lifetimes be lexical. Rather, regions are allocated late and deallocated as early as possible by explicit memory operations. The placement of allocation and deallocation annotations is determined by solving a system of constraints that expresses all possible annotations. Experiments show that our approach reduces memory requirements significantly, in some cases asymptotically.",
"title": ""
},
{
"docid": "2f4a4c223c13c4a779ddb546b3e3518c",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "a3cea7fc6c034c7f06595e8e1150e3c8",
"text": "Tweets are Donald Trump's quickest and most frequently employed way to send shockwaves. Tweets allow the user to respond quickly to the Kairos of developing situations—an advantage to the medium, but perhaps also a disadvantage, as Trump's 3am and 4am tweets tend to show. In this paper, we apply the three classical modes of rhetoric—forensic/judicial, deliberative, and epideictic/ceremonial rhetoric—to see how the modes manifest in Donald Trump's tweets as a presidential candidate, as President-Elect, and as President. Does the use of these three modes shift as Trump's rhetorical situation and especially subject position shift? Besides looking for quantitative changes in Trump's favored modes over time, our qualitative analysis includes representative examples and interesting examples of Trump's use of each mode (and combinations of them) during each time period.",
"title": ""
},
{
"docid": "ddef188a971d53c01d242bb9198eac10",
"text": "State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime.",
"title": ""
},
{
"docid": "d3afec9fcaabe6db91aa433370d0b4f1",
"text": "Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this article with some discussions.",
"title": ""
},
{
"docid": "c1e12041fbeaf82447cd5bf62e5207f0",
"text": "Numerous approaches have been investigated to achieve single system image in order to simplify the complexity of programming and administration of cluster. This paper focuses on virtualization approach and presents NEX, a system of cooperative hypervisors with single system image support. NEX could provide a virtual SMP machine over cluster nodes. Commodity operating systems that support SMP could run across nodes through NEX in parallel. The approach is based on prior research on virtualization and distributed shared memory. This paper describes the design and implementation of NEX and gives a preliminary evaluation.",
"title": ""
},
{
"docid": "b84890b3a8d0311a14bf1c9aff773660",
"text": "BACKGROUND\nHealth care systems will integrate new computing paradigms in the coming years. Context-awareness computing is a research field which often refers to health care as an interesting and rich area of application.\n\n\nAIM\nThrough a survey of the research literature, we intended to derive an objective view of the actual dynamism of context awareness in health care, and to identify strengths and weaknesses in this field.\n\n\nMETHODS\nAfter discussing definitions of context, we proposed a simple framework to analyse and characterize the use of context through three main axes. We then focused on context-awareness computing and reported on the main teams working in this area. We described some of the context-awareness projects in health care. A deeper analysis of the hospital-based projects demonstrated the gap between recommendations expressed for modelling context awareness and the actual use in a prototype. Finally, we identified pitfalls encountered in this area of research.\n\n\nRESULTS\nA number of opportunities remain for this evolving field of research. We found relatively few groups with such a specific focus. As yet there is no consensus as to the most appropriate models or attributes to include in context awareness. We conclude that a greater understanding of which aspects of context are important in a health care setting is required; the inherent sociotechnical nature of context-aware applications in health care; and the need to draw on a number of disciplines to conduct this research.",
"title": ""
},
{
"docid": "8d9fef4de18e4b84db3ae0ae684a3a1d",
"text": "Seven form-finding methods for tensegrity structures are reviewed and classified. The three kinematical methods include an analytical approach, a non-linear optimisation, and a pseudo-dynamic iteration. The four statical methods include an analytical method, the formulation of linear equations of equilibrium in terms of force densities, an energy minimisation, and a search for the equilibrium configurations of the struts of the structure connected by cables whose lengths are to be determined, using a reduced set of equilibrium equations. It is concluded that the kinematical methods are best suited to obtaining only configuration details of structures that are already essentially known, the force density method is best suited to searching for new configurations, but affords no control over the lengths of the elements of the structure. The reduced coordinates method offers a greater control on elements lengths, but requires more extensive symbolic manipulations.",
"title": ""
},
{
"docid": "3ea549bf872f042a05ee0243122e4745",
"text": "Studies on perceived restoration have focused on the differences between natural and artificial environments, whereas studies on what makes people select a particular restorative environment are limited. Using the location of Cheonggyecheon Stream Park in the urban center of Seoul, South Korea, this study tests whether people self-select locations based on individual and environmental characteristics. Empirical testing was conducted on 268 responses on a visitor survey that was developed based on the Perceived Restorativeness Scale. The major findings were that visitors’ characteristics such as gender, age, number of companions, visit frequency, and travel mode affect their selection of a particular setting, and that the chosen setting subsequently influences three dimensions of the Scale: being away, fascination, and coherence. These findings suggest that both individual and environmental characteristics should be considered in the creation of an effective perceived restorative environment in an urban center.",
"title": ""
},
{
"docid": "982d9a7e483bb254cbe3e6f90e7adcbd",
"text": "Multiple transmitters can be used to simultaneously transmit power wirelessly to a single receiver via strongly coupled magnetic resonance. A simple circuit model is used to help explain the multiple-transmitter wireless power transfer system. Through this particular scheme, there is an increase in gain and “diversity” of the transmitted power according to the number of transmit coils. The effect of transmitter resonant coil coupling is also shown. Resonant frequency detuning due to nearby metallic objects is observed, and the extent of how much tuning can be done is demonstrated. A practical power line synchronization technique is proposed to synchronize all transmit coils, which reduces additional dedicated synchronization wiring or the addition of an RF front-end module to send the reference driving signal.",
"title": ""
},
{
"docid": "0dc3961e8e42bb629e555f1a99fe2d74",
"text": "Õ( √ TK log(N/δ)); intuitively based on a nonconstructive minimax argument for choosing a distribution over policies such that the reward estimates for each policy have low variance. 2. Algorithm RANDOMIZEDUCB, also achieving optimal regret bound Õ( √ TK log(N/δ)); selection of distribution over policies in each round t can be computed in poly(t, log(N)) time, given a cost-sensitive classification learning algorithm for policy class Π.",
"title": ""
}
] |
scidocsrr
|
a23a288df5f4228eedb94d26d84583bf
|
Quasi-Homography Warps in Image Stitching
|
[
{
"docid": "b29947243b1ad21b0529a6dd8ef3c529",
"text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.",
"title": ""
},
{
"docid": "916fd932ae299b30f322aed6b5f35a9c",
"text": "This paper proposes a novel parametric warp which is a spatial combination of a projective transformation and a similarity transformation. Given the projective transformation relating two input images, based on an analysis of the projective transformation, our method smoothly extrapolates the projective transformation of the overlapping regions into the non-overlapping regions and the resultant warp gradually changes from projective to similarity across the image. The proposed warp has the strengths of both projective and similarity warps. It provides good alignment accuracy as projective warps while preserving the perspective of individual image as similarity warps. It can also be combined with more advanced local-warp-based alignment methods such as the as-projective-as-possible warp for better alignment accuracy. With the proposed warp, the field of view can be extended by stitching images with less projective distortion (stretched shapes and enlarged sizes).",
"title": ""
}
] |
[
{
"docid": "3531efcf8308541b0187b2ea4ab91721",
"text": "This paper proposed a novel controlling technique of pulse width modulation (PWM) mode and pulse frequency modulation (PFM) mode to keep the high efficiency within width range of loading. The novel control method is using PWM and PFM detector to achieve two modes switching appropriately. The controlling technique can make the efficiency of current mode DC-DC buck converter up to 88% at light loading and this paper is implemented by TSMC 0.35 mum CMOS process.",
"title": ""
},
{
"docid": "7990aa405f43f6e176bd25f150a58307",
"text": "The human skin is a promising surface for input to computing devices but differs fundamentally from existing touch-sensitive devices. The authors propose the use of skin landmarks, which offer unique tactile and visual cues, to enhance body-based user interfaces.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "0b4a107c825a095573ecded075b77b51",
"text": "Primary Argument Nursing has a rich heritage of advocating for a healthy society established on a foundation of social justice. The future legitimacy and success of public health nursing depends on recognising and appropriately addressing the social, economic and political determinants of health in the populations served. There is an incontrovertible association between population health status, absolute income levels and income inequality. Thus, along with other social determinants of health, income differentials within populations must be a fundamental consideration when planning and delivering nursing services. Ensuring that federal and state health policy explicitly addresses this key issue remains an important challenge for the nursing profession, the public health system and the Australian community.",
"title": ""
},
{
"docid": "5466fef2418d06ac195f4165103d0472",
"text": "Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record",
"title": ""
},
{
"docid": "6788bfdd287778ac8c600ee94a0b2a9c",
"text": "The predominant approach to Visual Question Answering (VQA) demands that the model represents within its weights all of the information required to answer any question about any image. Learning this information from any real training set seems unlikely, and representing it in a reasonable number of weights doubly so. We propose instead to approach VQA as a meta learning task, thus separating the question answering method from the information required. At test time, the method is provided with a support set of example questions/answers, over which it reasons to resolve the given question. The support set is not fixed and can be extended without retraining, thereby expanding the capabilities of the model. To exploit this dynamically provided information, we adapt a state-of-the-art VQA model with two techniques from the recent meta learning literature, namely prototypical networks and meta networks. Experiments demonstrate the capability of the system to learn to produce completely novel answers (i.e. never seen during training) from examples provided at test time. In comparison to the existing state of the art, the proposed method produces qualitatively distinct results with higher recall of rare answers, and a better sample efficiency that allows training with little initial data. More importantly, it represents an important step towards vision-and-language methods that can learn and reason on-the-fly.",
"title": ""
},
{
"docid": "d91077f97e745cdd73315affb5cbbdd2",
"text": "We consider the problem of learning the underlying graph of an unknown Ising model on p spins from a collection of i.i.d. samples generated from the model. We suggest a new estimator that is computationally efficient and requires a number of samples that is near-optimal with respect to previously established informationtheoretic lower-bound. Our statistical estimator has a physical interpretation in terms of “interaction screening”. The estimator is consistent and is efficiently implemented using convex optimization. We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is logarithmic in the system size p and exponential in the maximum coupling-intensity and maximum node-degree.",
"title": ""
},
{
"docid": "1b52822b76e7ace1f7e12a6f2c92b060",
"text": "We treated the mandibular retrusion of a 20-year-old man by distraction osteogenesis. Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.",
"title": ""
},
{
"docid": "1c04afe05954a425209aaf0267236255",
"text": "Twitter is an online social networking service where worldwide users publish their opinions on a variety of topics, discuss current issues, complain, and express positive or negative sentiment for products they use in daily life. Therefore, Twitter is a rich source of data for opinion mining and sentiment analysis. However, sentiment analysis for Twitter messages (tweets) is regarded as a challenging problem because tweets are short and informal. This paper focuses on this problem by the analyzing of symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observation, these emotion tokens are commonly used. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis on multilingual tweets. The paper describes the approach to performing sentiment analysis, that is able to determine positive, negative and neutral sentiments for a tested topic.",
"title": ""
},
{
"docid": "c3b6d46a9e1490c720056682328586d5",
"text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.",
"title": ""
},
{
"docid": "32b04b91bc796a082fb9c0d4c47efbf9",
"text": "Intell Sys Acc Fin Mgmt. 2017;24:49–55. Summary A two‐step system is presented to improve prediction of telemarketing outcomes and to help the marketing management team effectively manage customer relationships in the banking industry. In the first step, several neural networks are trained with different categories of information to make initial predictions. In the second step, all initial predictions are combined by a single neural network to make a final prediction. Particle swarm optimization is employed to optimize the initial weights of each neural network in the ensemble system. Empirical results indicate that the two‐ step system presented performs better than all its individual components. In addition, the two‐ step system outperforms a baseline one where all categories of marketing information are used to train a single neural network. As a neural networks ensemble model, the proposed two‐step system is robust to noisy and nonlinear data, easy to interpret, suitable for large and heterogeneous marketing databases, fast and easy to implement.",
"title": ""
},
{
"docid": "8ec018e0fc4ca7220387854bdd034a58",
"text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.",
"title": ""
},
{
"docid": "83d788ffb340b89c482965b96d6803c2",
"text": "A dead-time compensation method in voltage-source inverters (VSIs) is proposed. The method is based on a feedforward approach which produces compensating signals obtained from those of the I/sub d/-I/sub q/ current and primary angular frequency references in a rotating reference (d-q) frame. The method features excellent inverter output voltage distortion correction for both fundamental and harmonic components. The correction is not affected by the magnitude of the inverter output voltage or current distortions. Since this dead-time compensation method allows current loop calculations in the d-q frame at a slower sampling rate with a conventional microprocessor than calculations in a stationary reference frame, a fully digital, vector-controlled speed regulator with just a current component loop is realized for PWM (pulsewidth modulation) VSIs. Test results obtained for the compression method are described.<<ETX>>",
"title": ""
},
{
"docid": "3ae6703f2ea27b1c3418ce623aa394a0",
"text": "A Hardware Trojan is a malicious, undesired, intentional modification of an electronic circuit or design, resulting in the incorrect behaviour of an electronic device when in operation – a back-door that can be inserted into hardware. A Hardware Trojan may be able to defeat any and all security mechanisms (software or hardware-based) and subvert or augment the normal operation of an infected device. This may result in modifications to the functionality or specification of the hardware, the leaking of sensitive information, or a Denial of Service (DoS) attack. Understanding Hardware Trojans is vital when developing next generation defensive mechanisms for the development and deployment of electronics in the presence of the Hardware Trojan threat. Research over the past five years has primarily focussed on detecting the presence of Hardware Trojans in infected devices. This report reviews the state-of-the-art in Hardware Trojans, from the threats they pose through to modern prevention, detection and countermeasure techniques. APPROVED FOR PUBLIC RELEASE",
"title": ""
},
{
"docid": "b5997c5c88f57b387e56dc68445b38e2",
"text": "Identifying the relationship between two text objects is a core research problem underlying many natural language processing tasks. A wide range of deep learning schemes have been proposed for text matching, mainly focusing on sentence matching, question answering or query document matching. We point out that existing approaches do not perform well at matching long documents, which is critical, for example, to AI-based news article understanding and event or story formation. The reason is that these methods either omit or fail to fully utilize complicated semantic structures in long documents. In this paper, we propose a graph approach to text matching, especially targeting long document matching, such as identifying whether two news articles report the same event in the real world, possibly with different narratives. We propose the Concept Interaction Graph to yield a graph representation for a document, with vertices representing different concepts, each being one or a group of coherent keywords in the document, and with edges representing the interactions between different concepts, connected by sentences in the document. Based on the graph representation of document pairs, we further propose a Siamese Encoded Graph Convolutional Network that learns vertex representations through a Siamese neural network and aggregates the vertex features though Graph Convolutional Networks to generate the matching result. Extensive evaluation of the proposed approach based on two labeled news article datasets created at Tencent for its intelligent news products show that the proposed graph approach to long document matching significantly outperforms a wide range of state-of-the-art methods.",
"title": ""
},
{
"docid": "ca4d2862ba75bfc35d8e9ada294192e1",
"text": "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modelled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modelling, we compare it to existing ones (modelling the same scenario) using different pure movement and link based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. The simulations show that the new model discloses new information and has a significant impact on performance analysis.",
"title": ""
},
{
"docid": "6ddad64507fa5ebf3b2930c261584967",
"text": "In this article we propose a methodology to determine snow cover by means of Landsat-7 ETM+ and Landsat-5 TM images, as well as an improvement in daily Snow Cover TERRA- MODIS product (MOD10A1), between 2002 and 2005. Both methodologies are based on a NDSI threshold > 0.4. In the Landsat case, and although this threshold also selects water bodies, we have obtained optimal results using a mask of water bodies and generating a pre-boundary snow mask around the snow cover. Moreover, an important improvement in snow cover mapping in shadow cast areas by means of a hybrid classification has been obtained. Using these results as ground truth we have verified MODIS Snow Cover product using coincident dates. In the MODIS product, we have noted important commission errors in water bodies, forest covers and orographic shades because of the NDVI-NDSI filter applied to this product. In order to improve MODIS snow cover determination using MODIS images, we propose a hybrid methodology based on experience with Landsat images, which provide greater spatial resolution.",
"title": ""
},
{
"docid": "9841dd0b1c71f33f9fae95b6621b5ecc",
"text": "In recent years the number of wind turbines installed in Europe and other continents has increase dramatically. Appropriate lightning protection is required in order to avoid costly replacements of lightning damaged turbine blades, components of the electronic control system, and/or temporary loss of energy production. Depending on local site conditions elevated objects with heights of 100 m and more can frequently initiate upward lightning. From the 100 m high and instrumented radio tower on Gaisberg in Austria more than 50 flashes per year are initiated and measured. Also lightning location systems or video studies in Japan [1], [2] or in the US [3] show frequent occurrence of lightning initiated from wind turbines, especially during cold season. Up to now no reliable method exists to estimate the expected frequency of upward lightning for a given structure and location. About half of the flashes observed at the GBT are of ICCOnly type. Unfortunately this type of discharge is not detected by lightning location systems as its current waveform does not show any fast rising and high peak current pulses as typical for first or subsequent return strokes in downward lightning (cloud-to-ground). Nevertheless some of this ICCOnly type discharges transferred the highest amount of charge, exceeding the 300 C specified in IEC 62305 for lightning protection level LPL I.",
"title": ""
},
{
"docid": "a28a96adfef7854a864e45c4351e1bd5",
"text": "In the real-time bidding (RTB) display advertising ecosystem, when receiving a bid request, the demandside platform (DSP) needs to predict the click-through rate (CTR) for ads and calculate the bid price according to the CTR estimated. In addition to challenges similar to those encountered in sponsored search advertising, such as data sparsity and cold start problems, more complicated feature interactions involving multi-aspects, such as the user, publisher and advertiser, make CTR estimation in RTB more difficult. We consider CTR estimation in RTB as a tensor complement problem and propose a fully coupled interactions tensor factorization (FCTF) model based on Tucker decomposition (TD) to model three pairwise interactions between the user, publisher and advertiser and ultimately complete the tensor complement task. FCTF is a special case of the Tucker decomposition model; however, it is linear in runtime for both learning and prediction. Different from pairwise interaction tensor factorization (PITF), which is another special case of TD, FCTF is independent from the Bayesian personalized ranking optimization algorithm and is applicable to generic third-order tensor decomposition with popular simple optimizations, such as the least square method or mean square error. In addition, we also incorporate all explicit information obtained from different aspects into the FCTF model to alleviate the impact of cold start and sparse data on the final performance. We compare the performance and runtime complexity of our method with Tucker decomposition, canonical decomposition and other popular methods for CTR prediction over real-world advertising datasets. Our experimental results demonstrate that the improved model not only achieves better prediction quality than the others due to considering fully coupled interactions between three entities, user, publisher and advertiser but also can accomplish training and prediction with linear runtime. 2016 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
deca7266d583f32d79115ab982063297
|
My expressed breast milk turned pink!
|
[
{
"docid": "f6f1efae0bd6c6a8a9405814005a8352",
"text": "BACKGROUND\nSerratia marcescens, a known pathogen associated with postpartum mastitis, may be identified by its characteristic pigmentation.\n\n\nCASE\nA 36-year-old P0102 woman presented postpartum and said that her breast pump tubing had turned bright pink. S marcescens was isolated, indicating colonization. She was started on antibiotics. After viewing an Internet report in which a patient nearly died from a Serratia infection, she immediately stopped breastfeeding.\n\n\nCONCLUSION\nSerratia colonization may be noted before the development of overt infection. Because this pathogen can be associated with mastitis, physicians should be ready to treat and should encourage patients to continue nursing after clearance of the organism. Exposure to sensational Internet reports may make treatment recommendations difficult.",
"title": ""
}
] |
[
{
"docid": "268e0e06a23f495cc36958dafaaa045a",
"text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.",
"title": ""
},
{
"docid": "a1fb87b94d93da7aec13044d95ee1e44",
"text": "Many natural language processing tasks solely rely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies by soft probabilities between every two tokens, but they are not effective and efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, “reinforced self-attention (ReSA)”, for the mutual benefit of each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention called “reinforced sequence sampling (RSS)”, selecting tokens in parallel and trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, “reinforced self-attention network (ReSAN)”, solely based on ReSA. It achieves state-of-the-art performance on both Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.",
"title": ""
},
{
"docid": "66435d5b38f460edf7781372cd4e125b",
"text": "Network Function Virtualization (NFV) is emerging as a new paradigm for providing elastic network functions through flexible virtual network function (VNF) instances executed on virtualized computing platforms exemplified by cloud datacenters. In the new NFV market, well defined VNF instances each realize an atomic function that can be chained to meet user demands in practice. This work studies the dynamic market mechanism design for the transaction of VNF service chains in the NFV market, to help relinquish the full power of NFV. Combining the techniques of primal-dual approximation algorithm design with Myerson's characterization of truthful mechanisms, we design a VNF chain auction that runs efficiently in polynomial time, guarantees truthfulness, and achieves near-optimal social welfare in the NFV eco-system. Extensive simulation studies verify the efficacy of our auction mechanism.",
"title": ""
},
{
"docid": "79c484a91d7c7dc1ad84d1fe42e578be",
"text": "Falls, heart attack and stroke are among the leading causes of hospitalization for the elderly and illness individual. The chances of surviving a fall, heart attack or stroke are much greater if the senior gets help within an hour. In this project, a smart elderly home monitoring system (SEHMS) is designed and developed. An Android-based smart phone with 3-axial accelerometer is used as the telehealth device which could detect a fall of the carrier. The smart phone is then connected to the monitoring system by using the TCP/IP networking method via Wi-Fi. A graphical user interface (GUI) is developed as the monitoring system which exhibits the information gathered from the system. In addition, the concept of a remote panic button has been tested and implemented in this project by using the same android based smart phone. With the developed system, elderly and chronically ill patients could stay independently in their own home with care facilities and secure in the knowledge that they are being monitored.",
"title": ""
},
{
"docid": "ef98936202fea16571be47ee629b0955",
"text": "Macro tree transducers are a combination of top-down tree transducers and macro grammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata theoretical topics like composition, decomposition, domains, and ranges of the induced translation classes. The extension with regular look-ahead is considered. 0 1985 Academic Press, Inc.",
"title": ""
},
{
"docid": "6720ae7a531d24018bdd1d3d1c7eb28b",
"text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). 
However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. 
Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r",
"title": ""
},
{
"docid": "cbb87b1e7e94c95a2502e79d8440e17f",
"text": "Research on integrating small numbers of datasets suggests the use of customized matching rules in order to adapt to the patterns in the data and achieve better results. The state-of-the-art work on matching large numbers of datasets exploits attribute co-occurrence as well as the similarity of values between multiple sources. We build upon these research directions in order to develop a method for generalizing matching knowledge using minimal human intervention. The central idea of our research program is that even in large numbers of datasets of a specific domain patterns (matching knowledge) reoccur, and discovering those can facilitate the integration task. Our proposed approach plans to use and extend existing work of our group on schema and instance matching as well as on learning expressive rules with active learning. We plan to evaluate our approach on publicly available e-commerce data collected from the Web.",
"title": ""
},
{
"docid": "00e5128bdf1cfe572852682c9dc27497",
"text": "To bridge the gap between the capabilities of the state-of-the-art in factoid question answering (QA) and what real users ask, we need large datasets of real user questions that capture the various question phenomena users are interested in, and the diverse ways in which these questions are formulated. We introduce ComQA, a large dataset of real user questions that exhibit different challenging aspects such as temporal reasoning, compositionality, etc. ComQA questions come from the WikiAnswers community QA platform1. Through a large crowdsourcing effort, we clean the question dataset, group questions into paraphrase clusters, and annotate clusters with their answers. ComQA contains 11,214 questions grouped into 4,834 paraphrase clusters. We detail the process of constructing ComQA, including the measures taken to ensure its high quality while making effective use of crowdsourcing. We also present an extensive analysis of the dataset and the results achieved by state-of-the-art systems on ComQA, demonstrating that our dataset can be a driver of future research on QA.",
"title": ""
},
{
"docid": "5f7ea9c7398ddbb5062d029e307fcf22",
"text": "This paper presents a low cost and flexible home control and monitoring system using an embedded micro-web server, with IP connectivity for accessing and controlling devices and appliances remotely using Android based Smart phone app. The proposed system does not require a dedicated server PC with respect to similar systems and offers a novel communication protocol to monitor and control the home environment with more than just the switching functionality.",
"title": ""
},
{
"docid": "57225d9e25270898f78921703c5db93f",
"text": "This paper summarizes the main problems and solutions of power quality in microgrids, distributed-energy-storage systems, and ac/dc hybrid microgrids. First, the power quality enhancement of grid-interactive microgrids is presented. Then, the cooperative control for enhance voltage harmonics and unbalances in microgrids is reviewed. Afterward, the use of static synchronous compensator (STATCOM) in grid-connected microgrids is introduced in order to improve voltage sags/swells and unbalances. Finally, the coordinated control of distributed storage systems and ac/dc hybrid microgrids is explained.",
"title": ""
},
{
"docid": "a25338ae0035e8a90d6523ee5ef667f7",
"text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "b3bb84322c28a9d0493d9b8a626666e4",
"text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.",
"title": ""
},
{
"docid": "865c1ee7044cbb23d858706aa1af1a63",
"text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damages and to eliminate the risks of safety hazards. This paper examines two types of unique faults found in photovoltaic (PV) array installations that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. In some circumstances, fault current protection devices are unable to detect certain types of faults so that the fault may remain hidden in the PV system, even after irradiance increases. The other type of fault occurs when a string of PV modules is reversely connected, caused by inappropriate installation. This fault type brings new challenges for overcurrent protection devices because of the high rating voltage requirement. In both cases, these unique PV faults may subsequently lead to unexpected safety hazards, reduced system efficiency and reduced reliability.",
"title": ""
},
{
"docid": "fba7801d0b187a9a5fbb00c9d4690944",
"text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.",
"title": ""
},
{
"docid": "bbfdc30b412df84861e242d4305ca20d",
"text": "OBJECTIVES\nLocal anesthetic injection into the interspace between the popliteal artery and the posterior capsule of the knee (IPACK) has the potential to provide motor-sparing analgesia to the posterior knee after total knee arthroplasty. The primary objective of this cadaveric study was to evaluate injectate spread to relevant anatomic structures with IPACK injection.\n\n\nMETHODS\nAfter receipt of Institutional Review Board Biospecimen Subcommittee approval, IPACK injection was performed on fresh-frozen cadavers. The popliteal fossa in each specimen was dissected and examined for injectate spread.\n\n\nRESULTS\nTen fresh-frozen cadaver knees were included in the study. Injectate was observed to spread in the popliteal fossa at a mean ± SD of 6.1 ± 0.7 cm in the medial-lateral dimension and 10.1 ± 3.2 cm in the proximal-distal dimension. No injectate was noted to be in contact with the proximal segment of the sciatic nerve, but 3 specimens showed injectate spread to the tibial nerve. In 3 specimens, the injectate showed possible contact with the common peroneal nerve. The middle genicular artery was consistently surrounded by injectate.\n\n\nCONCLUSIONS\nThis cadaver study of IPACK injection demonstrated spread throughout the popliteal fossa without proximal sciatic involvement. However, the potential for injectate to spread to the tibial or common peroneal nerve was demonstrated. Consistent surrounding of the middle genicular artery with injectate suggests a potential mechanism of analgesia for the IPACK block, due to the predictable relationship between articular sensory nerves and this artery. Further study is needed to determine the ideal site of IPACK injection.",
"title": ""
},
{
"docid": "6c2ac0d096c1bcaac7fd70bd36a5c056",
"text": "The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders.",
"title": ""
},
{
"docid": "4cf8a9921e9a86fb7e6ae1c7ca17e0b8",
"text": "Improving farm productivity is essential for increasing farm profitability and meeting the rapidly growing demand for food that is fuelled by rapid population growth across the world. Farm productivity can be increased by understanding and forecasting crop performance in a variety of environmental conditions. Crop recommendation is currently based on data collected in field-based agricultural studies that capture crop performance under a variety of conditions (e.g., soil quality and environmental conditions). However, crop performance data collection is currently slow, as such crop studies are often undertaken in remote and distributed locations, and such data are typically collected manually. Furthermore, the quality of manually collected crop performance data is very low, because it does not take into account earlier conditions that have not been observed by the human operators but is essential to filter out collected data that will lead to invalid conclusions (e.g., solar radiation readings in the afternoon after even a short rain or overcast in the morning are invalid, and should not be used in assessing crop performance). Emerging Internet of Things (IoT) technologies, such as IoT devices (e.g., wireless sensor networks, network-connected weather stations, cameras, and smart phones) can be used to collate vast amount of environmental and crop performance data, ranging from time series data from sensors, to spatial data from cameras, to human observations collected and recorded via mobile smart phone applications. Such data can then be analysed to filter out invalid data and compute personalised crop recommendations for any specific farm. In this paper, we present the design of SmartFarmNet, an IoT-based platform that can automate the collection of environmental, soil, fertilisation, and irrigation data; automatically correlate such data and filter-out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. SmartFarmNet can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and our experiences and lessons learnt in developing this system concludes the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations.",
"title": ""
},
{
"docid": "c68b94c11170fae3caf7dc211ab83f91",
"text": "Data mining is the extraction of useful, prognostic, interesting, and unknown information from massive transaction databases and other repositories. Data mining tools predict potential trends and actions, allowing various fields to make proactive, knowledge-driven decisions. Recently, with the rapid growth of information technology, the amount of data has exponentially increased in various fields. Big data mostly comes from people’s day-to-day activities and Internet-based companies. Mining frequent itemsets and association rule mining (ARM) are well-analysed techniques for revealing attractive correlations among variables in huge datasets. The Apriori algorithm is one of the most broadly used algorithms in ARM, and it collects the itemsets that frequently occur in order to discover association rules in massive datasets. The original Apriori algorithm is for sequential (single node or computer) environments. This Apriori algorithm has many drawbacks for processing huge datasets, such as that a single machine’s memory, CPU and storage capacity are insufficient. Parallel and distributed computing is the better solution to overcome the above problems. Many researchers have parallelized the Apriori algorithm. This study performs a survey on several well-enhanced and revised techniques for the parallel Apriori algorithm in the HadoopMapReduce environment. The Hadoop-MapReduce framework is a programming model that efficiently and effectively processes enormous databases in parallel. It can handle large clusters of commodity hardware in a reliable and fault-tolerant manner. This survey will provide an overall view of the parallel Apriori algorithm implementation in the Hadoop-MapReduce environment and briefly discuss the challenges and open issues of big data in the cloud and Hadoop-MapReduce. Moreover, this survey will not only give overall existing improved Apriori algorithm methods on Hadoop-MapReduce but also provide future research direction for upcoming researchers.",
"title": ""
},
{
"docid": "c34cadf2a05909bb659e0e52f77dd0c3",
"text": "The present complexity in designing web applications makes software security a difficult goal to achieve. An attacker can explore a deployed service on the web and attack at his/her own leisure. Moving Target Defense (MTD) in web applications is an effective mechanism to nullify this advantage of their reconnaissance but the framework demands a good switching strategy when switching between multiple configurations for its web-stack. To address this issue, we propose the modeling of a real world MTD web application as a repeated Bayesian game. We formulate an optimization problem that generates an effective switching strategy while considering the cost of switching between different web-stack configurations. To use this model for a developed MTD system, we develop an automated system for generating attack sets of Common Vulnerabilities and Exposures (CVEs) for input attacker types with predefined capabilities. Our framework obtains realistic reward values for the players (defenders and attackers) in this game by using security domain expertise on CVEs obtained from the National Vulnerability Database (NVD). We also address the issue of prioritizing vulnerabilities that when fixed, improves the security of the MTD system. Lastly, we demonstrate the robustness of our proposed model by evaluating its performance when there is uncertainty about input attacker information.",
"title": ""
}
] |
scidocsrr
|
f8d3b94a8f20f0abc0238a6cd2d3d909
|
ADONN: Adaptive Design of Optimized Deep Neural Networks for Embedded Systems
|
[
{
"docid": "52d6711ebbafd94ab5404e637db80650",
"text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"title": ""
},
{
"docid": "73f9c6fc5dfb00cc9b05bdcd54845965",
"text": "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.",
"title": ""
},
{
"docid": "33817271f39357c4aef254ac96aab480",
"text": "Evolutionary computation methods have been successfully applied to neural networks since two decades ago, while those methods cannot scale well to the modern deep neural networks due to the complicated architectures and large quantities of connection weights. In this paper, we propose a new method using genetic algorithms for evolving the architectures and connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural networks, which is expected to avoid networks getting stuck into local minima which is typically a major issue in the backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially less computational resource. The proposed algorithm is examined and compared with 22 existing algorithms on nine widely used image classification tasks, including the stateof-the-art methods. The experimental results demonstrate the remarkable superiority of the proposed algorithm over the stateof-the-art algorithms in terms of classification error rate and the number of parameters (weights).",
"title": ""
}
] |
[
{
"docid": "0f25a4cd8a0a94f6666caadb6d4be3d3",
"text": "The tradeoff between the switching energy and electro-thermal robustness is explored for 1.2-kV SiC MOSFET, silicon power MOSFET, and 900-V CoolMOS body diodes at different temperatures. The maximum forward current for dynamic avalanche breakdown is decreased with increasing supply voltage and temperature for all technologies. The CoolMOS exhibited the largest latch-up current followed by the SiC MOSFET and silicon power MOSFET; however, when expressed as current density, the SiC MOSFET comes first followed by the CoolMOS and silicon power MOSFET. For the CoolMOS, the alternating p and n pillars of the superjunctions in the drift region suppress BJT latch-up during reverse recovery by minimizing lateral currents and providing low-resistance paths for carriers. Hence, the temperature dependence of the latch-up current for CoolMOS was the lowest. The switching energy of the CoolMOS body diode is the largest because of its superjunction architecture which means the drift region have higher doping, hence more reverse charge. In spite of having a higher thermal resistance, the SiC MOSFET has approximately the same latch-up current while exhibiting the lowest switching energy because of the least reverse charge. The silicon power MOSFET exhibits intermediate performance on switching energy with lowest dynamic latching current.",
"title": ""
},
{
"docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b",
"text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.",
"title": ""
},
{
"docid": "071b898fa3944ec0dabca317d3707217",
"text": "Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.",
"title": ""
},
{
"docid": "ab052fe98e171c00711f5aa8e0d9c94e",
"text": "BACKGROUND\nScoping studies are increasingly undertaken as distinct activities. The interpretation, methodology and expectations of scoping are highly variable. This suggests that conceptually, scoping is a poorly defined ambiguous term. The distinction between scoping as an integral preliminary process in the development of a research proposal or a formative, methodologically rigorous activity in its own right has not been extensively examined.\n\n\nAIMS\nThe aim of this review is to explore the nature and status of scoping studies within the nursing literature and develop a working definition to ensure consistency in the future use of scoping as a research related activity.\n\n\nDESIGN\nThis paper follows an interpretative scoping review methodology.\n\n\nDATA SOURCES\nAn explicit systematic search strategy included literary and web-based key word searches and advice from key researchers. Electronic sources included bibliographic and national research register databases and a general browser.\n\n\nRESULTS\nThe scoping studies varied widely in terms of intent, procedural and methodological rigor. An atheoretical stance was common although explicit conceptual clarification and development of a topic was limited. Four different levels of inquiry ranging from preliminary descriptive surveys to more substantive conceptual approaches were conceptualised. These levels reflected differing dimensional distinctions in which some activities constitute research whereas in others the scoping activities appear to fall outside the remit of research. Reconnaissance emerges as a common synthesising construct to explain the purpose of scoping.\n\n\nCONCLUSIONS\nScoping studies in relation to nursing are embryonic and continue to evolve. Its main strengths lie in its ability to extract the essence of a diverse body of evidence giving it meaning and significance that is both developmental and intellectually creative. As with other approaches to research and evidence synthesis a more standardized approach is required.",
"title": ""
},
{
"docid": "65896a8a8cc20ed4e69a47b09800819f",
"text": "Attention to processes has increased, as thousands of organizations have adopted process-focused programs such as TQM and ISO 9000. Proponents of such programs stress the promise of improved efficiency and profitability. But research has not consistently borne out these prospects. Moreover, the expectation of universal benefits is not consistent with research highlighting the important role of firm-specific capabilities in sustaining competitive advantage. In this paper, we use longitudinal panel data on ISO 9000 practices for firms in the auto supplier industry to study two new issues related to the adoption of process management practices. First, we find that, as the majority of firms within an industry adopt ISO 9000, late adopters no longer gain financial benefits from these practices. Second, we explore how firms’ technological coherence moderates the performance advantages of ISO 9000 practices. We find that firms that have a very narrow or very broad technological focus have fewer opportunities for complementary interactions that arise from process management practices and thus benefit less than those with limited breadth in technologically related activities. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3e177f8b02a5d67c7f4d93ce601c4539",
"text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. The results of evaluation the proposed method shows an improvement in the text classification problem using the DTCNN compared to baseline approaches.",
"title": ""
},
{
"docid": "7fed6f57ba2e17db5986d47742dc1a9c",
"text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm being the leading PLSR algorithm because of its speed and efficiency. Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.",
"title": ""
},
{
"docid": "230d380cbe134f01f3711309d8cc8e35",
"text": "For privacy concerns to be addressed adequately in today’s machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.",
"title": ""
},
{
"docid": "a9f1cc1cd6a608a20caf2550cfa2d4f4",
"text": "Answer Set Programming (ASP) is a well-established declarative paradigm. One of the successes of ASP is the availability of efficient systems. State-of-the-art systems are based on the ground+solve approach. In some applications this approach is infeasible because the grounding of one or few constraints is expensive. In this paper, we systematically compare alternative strategies to avoid the instantiation of problematic constraints, that are based on custom extensions of the solver. Results on real and synthetic benchmarks highlight some strengths and weaknesses of the different strategies. (Under consideration for acceptance in TPLP, ICLP 2017 Special Issue.)",
"title": ""
},
{
"docid": "b4ed57258b85ab4d81d5071fc7ad2cc9",
"text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.",
"title": ""
},
{
"docid": "a27660db1d7d2a6724ce5fd8991479f7",
"text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.",
"title": ""
},
{
"docid": "38156f5376f9d4643ce451bddce78408",
"text": "Association rule mining is one of the most popular data mining methods. However, mining association rules often results in a very large number of found rules, leaving the analyst with the task to go through all the rules and discover interesting ones. Sifting manually through large sets of rules is time consuming and strenuous. Visualization has a long history of making large amounts of data better accessible using techniques like selecting and zooming. However, most association rule visualization techniques are still falling short when it comes to a large number of rules. In this paper we present a new interactive visualization technique which lets the user navigate through a hierarchy of groups of association rules. We demonstrate how this new visualization techniques can be used to analyze a large sets of association rules with examples from our implementation in the R-package arulesViz.",
"title": ""
},
{
"docid": "e5752ff995d5c1133761986223269883",
"text": "Although much research has been performed on the adoption and usage phases of the information systems life cycle, the final phase, termination, has received little attention. This paper focuses on the development of discontinuous usage intentions, i.e. the behavioural intention in the termination phase, in the context of social networking services (SNSs), where it plays an especially crucial role. We argue that users stressed by using SNSs try to avoid the stress and develop discontinuous usage intentions, which we identify as a behavioural response to SNS-stress creators and SNS-exhaustion. Furthermore, as discontinuing the use of an SNS also takes effort and has costs, we theorize that switching-stress creators and switching-exhaustion reduce discontinuous usage intentions. We tested and validated these effects empirically in an experimental setting monitoring individuals who stopped using Facebook for a certain period and switched to alternatives. Our results show that SNS-stress creators and SNS-exhaustion cause discontinuous usage intentions, and switching-stress creators and switching-exhaustion reduce these intentions.",
"title": ""
},
{
"docid": "054443e445ec15d7a54215d3d201bb04",
"text": "In this study, a survey of the scientific literature in the field of optimum and preferred human joint angles in automotive sitting posture was conducted by referring to thirty different sources published between 1940 and today. The strategy was to use only sources with numerical angle data in combination with keywords. The aim of the research was to detect commonly used joint angles in interior car design. The main analysis was on data measurement, usability and comparability of the different studies. In addition, the focus was on the reasons for the differently described results. It was found that there is still a lack of information in methodology and description of background. Due to these reasons published data is not always usable to design a modern ergonomic car environment. As a main result of our literature analysis we suggest undertaking further research in the field of biomechanics and ergonomics to work out scientific based and objectively determined \"optimum\" joint angles in automotive sitting position.",
"title": ""
},
{
"docid": "c6f3d4b2a379f452054f4220f4488309",
"text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.",
"title": ""
},
{
"docid": "0c886080015642aa5b7c103adcd2a81d",
"text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.",
"title": ""
},
{
"docid": "df99d221aa2f31f03a059106991a1728",
"text": "With the advancement of mobile computing technology and cloud-based streaming music service, user-centered music retrieval has become increasingly important. User-specific information has a fundamental impact on personal music preferences and interests. However, existing research pays little attention to the modeling and integration of user-specific information in music retrieval algorithms/models to facilitate music search. In this paper, we propose a novel model, named User-Information-Aware Music Interest Topic (UIA-MIT) model. The model is able to effectively capture the influence of user-specific information on music preferences, and further associate users' music preferences and search terms under the same latent space. Based on this model, a user information aware retrieval system is developed, which can search and re-rank the results based on age- and/or gender-specific music preferences. A comprehensive experimental study demonstrates that our methods can significantly improve the search accuracy over existing text-based music retrieval methods.",
"title": ""
},
{
"docid": "aaa0e09d31dbc6cdf74c640b03a2fbbe",
"text": "Received: 26 April 2008 Accepted: 4 September 2008 Abstract There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions; a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services. European Journal of Information Systems (2008) 17, 505–517. doi:10.1057/ejis.2008.38",
"title": ""
},
{
"docid": "e83c81831f659303f3fe27987dd18a58",
"text": "We experimentally evaluate the network-level switching time of a functional 23-host prototype hybrid optical circuit-switched/electrical packet-switched network for datacenters called Mordia (Microsecond Optical Research Datacenter Interconnect Architecture). This hybrid network uses a standard electrical packet switch and an optical circuit-switched architecture based on a wavelength-selective switch that has a measured mean port-to-port network reconfiguration time of 11.5 $\\mu{\\rm s}$ including the signal acquisition by the network interface card. Using multiple parallel rings, we show that this architecture can scale to support the large bisection bandwidth required for future datacenters.",
"title": ""
}
] |
scidocsrr
|
20cda694e5af96ce06fbf5efddfdba12
|
Retrieval practice: the lack of transfer to deductive inferences.
|
[
{
"docid": "3ea9d312027505fb338a1119ff01d951",
"text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.",
"title": ""
}
] |
[
{
"docid": "68487f024611acabdf6ea15b3a527c6a",
"text": "GEODETIC data, obtained by ground- or space-based techniques, can be used to infer the distribution of slip on a fault that has ruptured in an earthquake. Although most geodetic techniques require a surveyed network to be in place before the earthquake1–3, satellite images, when collected at regular intervals, can capture co-seismic displacements without advance knowledge of the earthquake's location. Synthetic aperture radar (SAR) interferometry, first introduced4 in 1974 for topographic mapping5–8 can also be used to detect changes in the ground surface, by removing the signal from the topography9,10. Here we use SAR interferometry to capture the movements produced by the 1992 earthquake in Landers, California11. We construct an interferogram by combining topographic information with SAR images obtained by the ERS-1 satellite before and after the earthquake. The observed changes in range from the ground surface to the satellite agree well with the slip measured in the field, with the displacements measured by surveying, and with the results of an elastic dislocation model. As a geodetic tool, the SAR interferogram provides a denser spatial sampling (100 m per pixel) than surveying methods1–3 and a better precision (∼3 cm) than previous space imaging techniques12,13.",
"title": ""
},
{
"docid": "89596e6eedbc1f13f63ea144b79fdc64",
"text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.",
"title": ""
},
{
"docid": "58164220c13b39eb5d2ca48139d45401",
"text": "There is general agreement that structural similarity — a match in relational structure — is crucial in analogical processing. However, theories differ in their definitions of structural similarity: in particular, in whether there must be conceptual similarity between the relations in the two domains or whether parallel graph structure is sufficient. In two studies, we demonstrate, first, that people draw analogical correspondences based on matches in conceptual relations, rather than on purely structural graph matches; and, second, that people draw analogical inferences between passages that have matching conceptual relations, but not between passages with purely structural graph matches.",
"title": ""
},
{
"docid": "0400a0f84566110d3e4c19a39d711ace",
"text": "bnlearn is an R package (R Team 2009) which includes several algorithms for learning the structure of Bayesian networks with either discrete or continuous variables. Both constraint-based and score-based algorithms are implemented, and can use the functionality provided by the snow package (Tierney et al. 2008) to improve their performance via parallel computing. Several network scores and conditional independence algorithms are available for both the learning algorithms and independent use. Advanced plotting options are provided by the Rgraphviz package (Gentry et al. 2009).",
"title": ""
},
{
"docid": "137af0008a33c9d4b111c1d43f261d88",
"text": "Since the first cellular networks were trialled in the 1970s, we have witnessed an incredible wireless revolution. From 1G to 4G, the massive traffic growth has been managed by a combination of wider bandwidths, refined radio interfaces, and network densification, namely increasing the number of antennas per site [1]. Due its cost-efficiency, the latter has contributed the most. Massive MIMO (multiple-input multiple-output) is a key 5G technology that uses massive antenna arrays to provide a very high beamforming gain and spatially multiplexing of users, and hence, increases the spectral and energy efficiency (see [2] and references herein). It constitutes a centralized solution to densify a network, and its performance is limited by the inter-cell interference inherent in its cell-centric design. Conversely, ubiquitous cell-free Massive MIMO [3] refers to a distributed Massive MIMO system implementing coherent user-centric transmission to overcome the inter-cell interference limitation in cellular networks and provide additional macro-diversity. These features, combined with the system scalability inherent in the Massive MIMO design, distinguishes ubiquitous cell-free Massive MIMO from prior coordinated distributed wireless systems. In this article, we investigate the enormous potential of this promising technology while addressing practical deployment issues to deal with the increased back/front-hauling overhead deriving from the signal co-processing.",
"title": ""
},
{
"docid": "30f73e4086c3f763c37058691d0c435b",
"text": "There has been a recent trend toward delaying newborn baths because of mounting evidence that delayed bathing promotes breastfeeding, decreases hypothermia, and allows for more parental involvement with newborn care. A multidisciplinary team from a maternal-new-born unit at a military medical center designed and implemented an evidence-based practice change from infant sponge baths shortly after birth to delayed immersion baths. An analysis of newborn temperature data showed that newborns who received delayed immersion baths were less likely to be hypothermic than those who received a sponge bath shortly after birth. Furthermore, parents reported that they liked participating in bathing their newborns and that they felt prepared to bathe them at home.",
"title": ""
},
{
"docid": "5f40ac6afd39e3d2fcbc5341bc3af7b4",
"text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.",
"title": ""
},
{
"docid": "e165cac5eb7ad77b43670e4558011210",
"text": "PURPOSE\nTo retrospectively review our experience in infants with glanular hypospadias or hooded prepuce without meatal anomaly, who underwent circumcision with the plastibell device. Although circumcision with the plastibell device is well described, there are no reported experiences pertaining to hooded prepuce or glanular hypospadias that have been operated on by this technique.\n\n\nMATERIALS AND METHODS\nBetween September 2002 and September 2008, 21 children with hooded prepuce (age 1 to 11 months, mean 4.6 months) were referred for hypospadias repair. Four of them did not have meatal anomaly. Their parents accepted this small anomaly and requested circumcision without glanuloplasty. In all cases, the circumcision was corrected by a plastibell device.\n\n\nRESULTS\nNo complications occurred in the circumcised patients, except delayed falling of bell in one case that was removed by a surgeon, after the tenth day.\n\n\nCONCLUSION\nCircumcision with the plastibell device is a suitable method for excision of hooded prepuce. It can also be used successfully in infants, who have miniglanular hypospadias, and whose parents accepted this small anomaly.",
"title": ""
},
{
"docid": "b95776a33ab5ff12d405523a90cbfb93",
"text": "In this paper, we introduce the splitter placement problem in wavelength-routed networks (SP-WRN). Given a network topology, a set of multicast sessions, and a fixed number of multicast-capable cross-connects, the SP-WRN problem entails the placement of the multicast-capable cross-connects so that the blocking probability is minimized. The SP-WRN problem is NP-complete as it includes as a subproblem the routing and wavelength assignment problem which is NP-complete. To gain a deeper insight into the computational complexity of the SP-WRN problem, we define a graph-theoretic version of the splitter placement problem (SPG), and show that even SPG is NP-complete. We develop three heuristics for the SP-WRN problem with different degrees of trade-off between computation time and quality of solution. The first heuristic uses the CPLEX general solver to solve an integer-linear program (ILP) of the problem. The second heuristic is based on a greedy approach and is called most-saturated node first (MSNF). The third heuristic employs simulated annealing (SA) with route-coordination. Through numerical examples on a wide variety of network topologies we demonstrate that: (1) no more than 50% of the cross-connects need to be multicast-capable, (2) the proposed SA heuristic provides fast near-optimal solutions, and (3) it is not practical to use general solvers such as CPLEX for solving the SP-WRN problem.",
"title": ""
},
{
"docid": "8ab9f1be0a8ed182137c9a8a9c9e71d0",
"text": "PURPOSE OF REVIEW\nTo document recent evidence regarding the role of nutrition as an intervention for sarcopenia.\n\n\nRECENT FINDINGS\nA review of seven randomized controlled trials (RCTs) on beta-hydroxy-beta-methylbutyrate (HMB) alone on muscle loss in 147 adults showed greater muscle mass gain in the intervention group, but no benefit in muscle strength and physical performance measures. Three other review articles examined nutrition and exercise as combined intervention, and suggest enhancement of benefits of exercise by nutrition supplements (energy, protein, vitamin D). Four trials reported on nutrition alone as intervention, mainly consisting of whey protein, leucine, HMB and vitamin D, with variable results on muscle mass and function. Four trials examined the combined effects of nutrition combined with exercise, showing improvements in muscle mass and function.\n\n\nSUMMARY\nTo date, evidence suggests that nutrition intervention alone does have benefit, and certainly enhances the impact of exercise. Nutrients include high-quality protein, leucine, HMB and vitamin D. Long-lasting impact may depend on baseline nutritional status, baseline severity of sarcopenia, and long-lasting adherence to the intervention regime. Future large-scale multicentered RCTs using standardized protocols may provide evidence for formulating guidelines on nutritional intervention for sarcopenia. There is a paucity of data for nursing home populations.",
"title": ""
},
{
"docid": "1f7c871c9e0fb22d33abd536dd695175",
"text": "A decade bandwidth 90 W, GaN HEMT push-pull power amplifier has been demonstrated. The power amplifier exhibits 18 dB small-signal gain with 20-1100 MHz 3-dB bandwidth and obtains 82.2-107.5 W CW output power with 51.9-73.8 % drain efficiency and 15.2-16.3 dB power gain over the 100-1000 MHz band. The push-pull power amplifier occupies a 2 x 2 inch PCB area and uses a novel compact broadband low loss coaxial coiled 1:1 balun to combine two 45 W packaged broadband lossy matched GaN HEMT amplifiers matched to 25 U. The packaged amplifiers contain a GaN on SiC HEMT operating at 50 V drain voltage with integrated passive matching circuitry on GaAs substrate. These amplifiers are targeted for use in multi-band multi-standard communication systems and for instrumentation applications.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "243342f89a0670486fac8c1c4e5801c8",
"text": "Twitter is not only a social network, but also an increasingly important news media. In Twitter, retweeting is the most important information propagation mechanism, and supernodes (news medias) that have many followers are the most important information sources. Therefore, it is important to understand the news retweet propagation from supernodes and predict news popularity quickly at the very first few seconds upon publishing. Such understanding and prediction will benefit many applications such as social media management, advertisement and interaction optimization between news medias and followers. In this paper, we identify the characteristics of news propagation from supernodes from the trace data we crawled from Twitter. Based on the characteristics, we build a news popularity prediction model that can predict the final number of retweets of a news tweet very quickly. Through trace-driven experiments, we then validate our prediction model by comparing our predicted popularity and real popularity, and show its superior performance in comparison with the regression prediction model. From the study, we found that the average interaction frequency between the retweeters and the news source is correlated with news popularity. Also, the negative sentiment of news has some correlations with retweet popularity while the positive sentiment of news does not have such obvious correlation. Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "d08aa142ca5134b34dd61c4eab6852f0",
"text": "The first version of the 3GPP NarrowBand-IoT (NB-IoT) standards has been finalized in June 2016 as part of Release 13. NB-IoT is a promising new radio access technology which can coexist with existing GSM, UMTS and LTE deployments. In fact, NB-IoT specifications have been integrated into LTE standards. NB-IoT goes a step further than the MTC [3] (Machine Type Communication) specification, focusing on extremely low cost devices, massive deployments and reduced data rates with a carrier bandwidth of just 200 kHz (hence its name). In this paper we will take a close look at 3GPP specifications to discover which are the modifications required in traditional LTE deployments to provide connectivity to the upcoming Cat-NB1 User Equipments (UEs).",
"title": ""
},
{
"docid": "deedf390faeef304bf0479a844297113",
"text": "A compact 24-GHz Doppler radar module is developed in this paper for non-contact human vital-sign detection. The 24-GHz radar transceiver chip, transmitting and receiving antennas, baseband circuits, microcontroller, and Bluetooth transmission module have been integrated and implemented on a printed circuit board. For a measurement range of 1.5 m, the developed radar module can successfully detect the respiration and heartbeat of a human adult.",
"title": ""
},
{
"docid": "95395c693b4cdfad722ae0c3545f45ef",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "0481c35949653971b75a3a4c3051c590",
"text": "Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two features: 1) being capable of discriminating the tracked target from its background 2) being robust to the target’s appearance variations during tracking. Instead of integrating the two requirements into the appearance model, in this paper, we propose a tracking method that deals with these problems separately based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target’s appearance variations are directly modeled as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via l1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach using four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two latest state-of-the-art trackers.",
"title": ""
},
{
"docid": "92099d409e506a776853d4ae80c4285e",
"text": "Arti
cial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clari
es the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogiplaying Bonanza is an estimated value function via Rusts (1987) nested
xed-point method. AlphaGos supervised-learning policy network is a deep neural network implementation of Hotz and Millers (1993) conditional choice probability estimation; its reinforcement-learning value networkis equivalent to Hotz, Miller, Sanders, and Smiths (1994) conditional choice simulation method. Relaxing these AIs implicit econometric assumptions would improve their structural interpretability. Keywords: Arti
cial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator. JEL classi
cations: A12, C45, C57, C63, C73. First version: October 30, 2017. This paper bene
ted from seminar comments at Riken AIP, Georgetown, Tokyo, Osaka, Harvard, and The Third Cambridge Area Economics and Computation Day conference at Microsoft Research New England, as well as conversations with Susan Athey, Xiaohong Chen, Jerry Hausman, Greg Lewis, Robert Miller, Yusuke Narita, Aviv Nevo, Anton Popov, John Rust, Takuo Sugaya, Elie Tamer, and Yosuke Yasuda. yYale Department of Economics and MIT Department of Economics. E-mail: mitsuru.igami@gmail.com.",
"title": ""
},
{
"docid": "dd01a74456f7163e3240ebde99cad89e",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\"(objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 milliseconds) until the mass-energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass-energy difference leads to sufficient separation of space-time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum classical reduction occurs. Unlike the random, \"subjective reduction\"(SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a self-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for postreduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\"which tune and \"orchestrate\"the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\"(\"B>Orch OR\", and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500 milliseconds) will elicit Orch OR. In providing a connection among 1) pre-conscious to conscious transition, 2) fundamental space-time notions, 3) noncomputability, and 4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\", we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed.",
"title": ""
},
{
"docid": "12db7d3dfc43cef474acea4eaf5ba4c3",
"text": "A growing list of medically important developmental defects and disease mechanisms can be traced to disruption of the planar cell polarity (PCP) pathway. The PCP system polarizes cells in epithelial sheets along an axis orthogonal to their apical-basal axis. Studies in the fruitfly, Drosophila, have suggested that components of the PCP signaling system function in distinct modules, and that these modules and the effector systems with which they interact function together to produce emergent patterns. Experimental methods allow the manipulation of individual PCP signaling molecules in specified groups of cells; these interventions not only perturb the polarization of the targeted cells at a subcellular level, but also perturb patterns of polarity at the multicellular level, often affecting nearby cells in characteristic ways. These kinds of experiments should, in principle, allow one to infer the architecture of the PCP signaling system, but the relationships between molecular interactions and tissue-level pattern are sufficiently complex that they defy intuitive understanding. Mathematical modeling has been an important tool to address these problems. This article explores the emergence of a local signaling hypothesis, and describes how a local intercellular signal, coupled with a directional cue, can give rise to global pattern. We will discuss the critical role mathematical modeling has played in guiding and interpreting experimental results, and speculate about future roles for mathematical modeling of PCP. Mathematical models at varying levels of inhibition have and are expected to continue contributing in distinct ways to understanding the regulation of PCP signaling.",
"title": ""
}
] |
scidocsrr
|
50099f5e41fde52e443e6551904d23b9
|
Exploiting self-similarity in geometry for voxel based solid modeling
|
[
{
"docid": "91dbb5df6bc5d3db43b51fc7a4c84468",
"text": "An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres.\nAll algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB. The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. The CUBE architecture is also presented.",
"title": ""
},
{
"docid": "1d8db3e4aada7f5125cd72df4dfab1f4",
"text": "Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.",
"title": ""
}
] |
[
{
"docid": "7959204dbaa087fc7c37e4157e057efc",
"text": "OBJECTIVE\nThe primary objective of this study was to compare the effectiveness of a water flosser plus sonic toothbrush to a sonic toothbrush alone on the reduction of bleeding, gingivitis, and plaque. The secondary objective was to compare the effectiveness of different sonic toothbrushes on bleeding, gingivitis, and plaque.\n\n\nMETHODS\nOne-hundred and thirty-nine subjects completed this randomized, four-week, single-masked, parallel clinical study. Subjects were assigned to one of four groups: Waterpik Complete Care, which is a combination of a water flosser plus power toothbrush (WFS); Sensonic Professional Plus Toothbrush (SPP); Sonicare FlexCare toothbrush (SF); or an Oral-B Indicator manual toothbrush (MT). Subjects were provided written and verbal instructions for all power products at baseline, and instructions were reviewed at the two-week visit. Data were evaluated for whole mouth, facial, and lingual surfaces for bleeding on probing (BOP) and gingivitis (MGI). Plaque data were evaluated for whole mouth, lingual, facial, approximal, and marginal areas of the tooth using the Rustogi Modification of the Navy Plaque Index (RMNPI). Data were recorded at baseline (BL), two weeks (W2), and four weeks (W4).\n\n\nRESULTS\nAll groups showed a significant reduction from BL in BOP, MGI, and RMNPI for all areas measured at the W2 and W4 visits (p < 0.001). The reduction of BOP was significantly higher for the WFS group than the other three groups at W2 and W4 for all areas measured (p < 0.001 for all, except p = 0.007 at W2 and p = 0.008 for W4 lingual comparison to SPP). The WFS group was 34% more effective than the SPP group, 70% more effective than the SF group, and 1.59 times more effective than the MT group for whole mouth bleeding scores (p < 0.001) at W4. The reduction of MGI was significantly higher for the WFS group; 23% more effective than SPP, 48% more effective than SF, and 1.35 times more effective than MT for whole mouth (p <0.001) at W4. The reduction of MGI was significantly higher for WFS than the SF and MT for facial and lingual surfaces, and more effective than the SPP for facial surfaces (p < 0.001) at W4. The WFS group showed significantly better reductions for plaque than the SF and MT groups for whole mouth, facial, lingual, approximal, and marginal areas at W4 (p < 0.001; SF facial p = 0.025). For plaque reduction, the WFS was significantly better than the SPP for whole mouth (p = 0.003) and comparable for all other areas and surfaces at W4. The WFS was 52% more effective for whole mouth, 31% for facial, 77% for lingual, 1.22 times for approximal, and 1.67 times for marginal areas compared to the SF for reducing plaque scores at W4 (p < 0.001; SF facial p = 0.025). The SPP had significantly higher reductions than the SF for whole mouth and lingual BOP and MGI scores, and whole mouth, approximal, marginal, and lingual areas for plaque at W4.\n\n\nCONCLUSION\nThe Waterpik Complete Care is significantly more effective than the Sonicare FlexCare toothbrush for reducing gingival bleeding, gingivitis, and plaque. The Sensonic Professional Plus Toothbrush is significantly more effective than the Sonicare Flex-Care for reducing gingival bleeding, gingivitis, and plaque.",
"title": ""
},
{
"docid": "e69a90ff7c2cd96a8e31cef5cb1ee2d4",
"text": "Smart grids are essentially electric grids that use information and communication technology to provide reliable, efficient electricity transmission and distribution. Security and trust are of paramount importance. Among various emerging security issues, FDI attacks are one of the most substantial ones, which can significantly increase the cost of the energy distribution process. However, most current research focuses on countermeasures to FDIs for traditional power grids rather smart grid infrastructures. We propose an efficient and real-time scheme to detect FDI attacks in smart grids by exploiting spatial-temporal correlations between grid components. Through realistic simulations based on the US smart grid, we demonstrate that the proposed scheme provides an accurate and reliable solution.",
"title": ""
},
{
"docid": "001b5a976b6b6ccb15ab80ead4617422",
"text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such times series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN, used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring less number of hidden units.",
"title": ""
},
{
"docid": "d21e4e55966bac19bbed84b23360b66d",
"text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.",
"title": ""
},
{
"docid": "2d0f0ebf29edc46ad68f1f6c358984db",
"text": "A multilevel approach was used to analyse relationships between perceived classroom environments and emotions in mathematics. Based on Pekrun’s (2000) [A social-cognitive, control-value theory of achievement emotions. In J. Heckhausen (Ed.), Motivational psychology of human development (pp. 143e163)] social-cognitive, control-value theory of achievement emotions, we hypothesized that environmental characteristics conveying control and value to the students would be related to their experience of enjoyment, anxiety, anger, and boredom in mathematics. Multilevel modelling of data from 1623 students from 69 classes (grades 5e10) confirmed close relationships between environmental variables and emotional experiences that functioned predominantly at the individual level. Compositional effects further revealed that classes’ aggregate environment perceptions as well as their compositions in terms of aggregate achievement and gender ratio were additionally linked to students’ emotions in mathematics. Methodological and practical implications of the findings are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c638fe67f5d4b6e04a37e216edb849fa",
"text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.",
"title": ""
},
{
"docid": "7a52fecf868040da5db3bd6fcbdcc0b2",
"text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.",
"title": ""
},
{
"docid": "b3d4f37cbf2b277ecec7291d12f4dde5",
"text": "This paper reports on the design, fabrication, assembly, as well as the optical, mechanical and thermal characterization of a novel MEMS-based optical cochlear implant (OCI). Building on advances in optogenetics, it will enable the optical stimulation of neural activity in the auditory pathway at 10 independently controlled spots. The optical stimulation of the spiral ganglion neurons (SGNs) promises a pronounced increase in the number of discernible acoustic frequency channels in comparison with commercial cochlear implants based on the electrical stimulation. Ten high-efficiency light-emitting diodes are integrated as a linear array onto an only 12-μm-thick highly flexible polyimide substrate with three metal and three polyimide layers. The high mechanical flexibility of this novel OCI enables its insertion into a 300 μm wide channel with an outer bending radius of 1 mm. The 2 cm long and only 240 μm wide OCI is electrically passivated with a thin layer of Cy-top™.",
"title": ""
},
{
"docid": "9cf48e5fa2cee6350ac31f236696f717",
"text": "Komatiites are rare ultramafic lavas that were produced most commonly during the Archean and Early Proterozoic and less frequently in the Phanerozoic. These magmas provide a record of the thermal and chemical characteristics of the upper mantle through time. The most widely cited interpretation is that komatiites were produced in a plume environment and record high mantle temperatures and deep melting pressures. The decline in their abundance from the Archean to the Phanerozoic has been interpreted as primary evidence for secular cooling (up to 500‡C) of the mantle. In the last decade new evidence from petrology, geochemistry and field investigations has reopened the question of the conditions of mantle melting preserved by komatiites. An alternative proposal has been rekindled: that komatiites are produced by hydrous melting at shallow mantle depths in a subduction environment. This alternative interpretation predicts that the Archean mantle was only slightly (V100‡C) hotter than at present and implicates subduction as a process that operated in the Archean. Many thermal evolution and chemical differentiation models of the young Earth use the plume origin of komatiites as a central theme in their model. Therefore, this controversy over the mechanism of komatiite generation has the potential to modify widely accepted views of the Archean Earth and its subsequent evolution. This paper briefly reviews some of the pros and cons of the plume and subduction zone models and recounts other hypotheses that have been proposed for komatiites. We suggest critical tests that will improve our understanding of komatiites and allow us to better integrate the story recorded in komatiites into our view of early Earth evolution. 6 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "69f72b8eadadba733f240fd652ca924e",
"text": "We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare, intelligence, etc. where users need additional information for decision making and is especially crucial for applications that rely on automatically constructed knowledge bases where machine learned systems extract facts from an input corpus and working of the extractors is opaque to the end-user. We follow an approach inspired from information retrieval and propose a simple and efficient, yet effective solution that takes into account passage level as well as document level properties to produce a ranked list of passages describing a given input relation. We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus and report results of user studies conducted to study the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "63de2448edead6e16ef2bc86c3acd77b",
"text": "In traditional topic models such as LDA, a word is generated by choosing a topic from a collection. However, existing topic models do not identify different types of topics in a document, such as topics that represent the content and topics that represent the sentiment. In this paper, our goal is to discover such different types of topics, if they exist. We represent our model as several parallel topic models (called topic factors), where each word is generated from topics from these factors jointly. Since the latent membership of the word is now a vector, the learning algorithms become challenging. We show that using a variational approximation still allows us to keep the algorithm tractable. Our experiments over several datasets show that our approach consistently outperforms many classic topic models while also discovering fewer, more meaningful, topics. 1",
"title": ""
},
{
"docid": "db9ab90f56a5762ebf6729ffc802a02a",
"text": "In this paper we present a novel approach to music analysis, in which a grammar is automatically generated explaining a musical work’s structure. The proposed method is predicated on the hypothesis that the shortest possible grammar provides a model of the musical structure which is a good representation of the composer’s intent. The effectiveness of our approach is demonstrated by comparison of the results with previously-published expert analysis; our automated approach produces results comparable to human annotation. We also illustrate the power of our approach by showing that it is able to locate errors in scores, such as introduced by OMR or human transcription. Further, our approach provides a novel mechanism for intuitive high-level editing and creative transformation of music. A wide range of other possible applications exists, including automatic summarization and simplification; estimation of musical complexity and similarity, and plagiarism detection.",
"title": ""
},
{
"docid": "0d5fd1dfdcb6beda733eb43f2ed834ea",
"text": "In this paper, approximation techniques based on the shifted Jacobi together with spectral tau technique are presented to solve a class of initial-boundary value problems for the fractional diffusion equations with variable coefficients on a finite domain. The fractional derivatives are described in the Caputo sense. The technique is derived by expanding the required approximate solution as the elements of shifted Jacobi polynomials. Using the operational matrix of the fractional derivative, the problem can be reduced to a set of linear algebraic equations. Numerical examples are included to demonstrate the validity and applicability of the technique and a comparison is made with the existing results to show that the proposed method is easy to implement and produce accurate results.",
"title": ""
},
{
"docid": "426a7c1572e9d68f4ed2429f143387d5",
"text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.",
"title": ""
},
{
"docid": "261318ee599b56b005a5581bd33938b9",
"text": "This paper reports on a study of the prevalence of and possible reasons for peer-to-peer transaction marketplace (P2PM) users turning to out-of-market (OOM) transactions after finding transaction partners within a P2P system. We surveyed 97 P2PM users and interviewed 22 of 58 who reported going OOM. We did not find any evidence of predisposing personality factors for OOM activity; instead, it seems to be a rational response to circumstances, with a variety of situationally rational motivations at play, such as liking the transaction partner and trusting that good quality repeat transactions will occur in the future.",
"title": ""
},
{
"docid": "4c3d8c30223ef63b54f8c7ba3bd061ed",
"text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.",
"title": ""
},
{
"docid": "e24743e3a183ebd20d5d3cfd2b3b3235",
"text": "This new book by Andrew Cohen comes in the well-established series, Applied Linguistics and Language Study, which explores key issues in language acquisition and language use. Cohen’s book focuses on learner strategies and is written primarily for teachers, administrators, and researchers of second and foreign language programmes. It is hard to think of a more suitable author of a book on how to go about the complex endeavour of learning a second or foreign language than Cohen, himself a learner of twelve languages and a continuous user of seven! Of course, Cohen is also an experienced conductor of research focusing on learner strategies and the author and co-author of numerous articles on the topic. Except for a research report on strategies-based instruction which appears in print for the first time in Chapter 5, all of the chapters in the present volume consist of previously published material, either by Cohen alone or co-authored with Cohen, which has been revised and updated. After a short introduction, Cohen starts out with a discussion of terminology in Chapter 2, suggesting the broad working definition of second language learner strategies to encompass both second language learning and second language use strategies. According to Cohen, second language learner strategies can be defined:",
"title": ""
},
{
"docid": "ea75bf062f21a12aacd88ccb61ba47a0",
"text": "This paper describes a Twitter sentiment analysis system that classifies a tweet as positive or negative based on its overall tweet-level polarity. Supervised learning classifiers often misclassify tweets containing conjunctions such as “but” and conditionals such as “if”, due to their special linguistic characteristics. These classifiers also assign a decision score very close to the decision boundary for a large number tweets, which suggests that they are simply unsure instead of being completely wrong about these tweets. To counter these two challenges, this paper proposes a system that enhances supervised learning for polarity classification by leveraging on linguistic rules and sentic computing resources. The proposed method is evaluated on two publicly available Twitter corpora to illustrate its effectiveness.",
"title": ""
},
{
"docid": "10fa3df6bc00cb1165d4ef07d6e2f85c",
"text": "We present a novel algorithm for view synthesis that utilizes a soft 3D reconstruction to improve quality, continuity and robustness. Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that provides continuity across synthesized views and robustness to depth uncertainty. During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide baseline captures, and lightfield videos produced from camera arrays.",
"title": ""
},
{
"docid": "e18ddc1b569a6f39ee5cbf133738a2a1",
"text": "Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary— a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.",
"title": ""
}
] |
scidocsrr
|
de2beeb9b4e04a3eb85df0388ff8d764
|
Effective emotion recognition in movie audio tracks
|
[
{
"docid": "3f5eed1f718e568dc3ba9abbcd6bfedd",
"text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"title": ""
}
] |
[
{
"docid": "a95b9fbd2f5f6373fb9d04a29f1beab3",
"text": "Discovering and accessing hydrologic and climate data for use in research or water management can be a difficult task that consumes valuable time and personnel resources. Until recently, this task required discovering and navigating many different data repositories, each having its ownwebsite, query interface, data formats, and descriptive language. New advances in cyberinfrastructure and in semantic mediation technologies have provided the means for creating better tools supporting data discovery and access. In this paper we describe a freely available and open source software tool, called HydroDesktop, that can be used for discovering, downloading, managing, visualizing, and analyzing hydrologic data. HydroDesktop was created as a means for searching across and accessing hydrologic data services that have been published using the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS). We describe the design and architecture of HydroDesktop, its novel contributions in web services-based hydrologic data search and discovery, and its unique extensibility interface that enables developers to create custom data analysis and visualization plug-ins. The functionality of HydroDesktop and some of its existing plug-ins are introduced in the context of a case study for discovering, downloading, and visualizing data within the Bear River Watershed in Idaho, USA. 2012 Elsevier Ltd. All rights reserved. Software availability All CUAHSI HydroDesktop software and documentation can be accessed at http://his.cuahsi.org. Source code and additional documentation for HydroDesktop can be accessed at the HydroDesktop code repository website http://hydrodesktop.codeplex. com. HydroDesktop and its source code are released under the New Berkeley Software Distribution (BSD) License which allows for liberal reuse of the software and code.",
"title": ""
},
{
"docid": "9b6ef205d9697f8ee4958858c0fde651",
"text": "Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts. Furthermore, simple combination methods often work reasonably well relative to more complex combinations. This paper provides a review and annotated bibliography of that literature, including contributions from the forecasting, psychology, statistics, and management science literatures. The objectives are to provide a guide to the literature for students and researchers and to help researchers locate contributions in specific areas, both theoretical and applied. Suggestions for future research directions include (1) examination of simple combining approaches to determine reasons for their robustness, (2) development of alternative uses of multiple forecasts in order to make better use of the information they contain, (3) use of combined forecasts as benchmarks for forecast evaluation, and (4) study of subjective combination procedures. Finally, combining forecasts should become part of the mainstream of forecasting practice. In order to achieve this, practitioners should be encouraged to combine forecasts, and software to produce combined forecasts easily should be made available.",
"title": ""
},
{
"docid": "627492847287cba412f68b1311120fe7",
"text": "ion as though it were an existing object, objectifying it (Whitehead’s „fallacy of misplaced concreteness‟). Interpreting reality concretely when what is required is interpreting it abstractly is a familiar epistemic distortion. Still another is the early positivist supposition that only those propositions are meaningful that are empirically verifiable. 2. Socio-cultural Distortions: involve taking for granted belief systems that pertain to power and social relationships, especially those currently prevailing and legitimized and enforced by institutions. A common sociocultural distortion is mistaking self-fulfilling and self-validating beliefs for beliefs that are not selffulfilling or self-validating. If we believe that members of a subgroup are lazy, unintelligent, and unreliable and treat them accordingly, they may become lazy, unintelligent, and unreliable. We have created a ‘selffulfilling prophesy’. When based on mistaken premises in the first place, such a belief becomes a distorted meaning perspective. Another distortion of this type is assuming that the particular interest of a subgroup is the general interest of the group as a whole. (Geuss, 1981, p.14). When people refer to ideology as a distorted belief system, they usually refer to what here is understood as sociocultural distortion. As critical social theorists have emphasized, ideology can become a form or false consciousness in that it supports, stablizes, or legitimates dependency-producing social institutions, unjust social practices, and relations of exploitation, exclusion, and domination. It reflects the hegemony of the collective, mainstream meaning perspective and existing power relationships that actively support the status quo. Ideology is a form of prereflexive consciousness, which does not question the validity of existing social norms and resists critique of presuppositions. Such social amnesia is manifested in every facet of our lives in the economic, political, social, health, religious, educational, occupational, and familial. Television has become a major force in perpetuating and extending the hegemony of mainstream ideology as, increasingly, will the Internet. The work of Paulo Freire (1970) in traditional village cultures has demonstrated how an adult educator can precipitate as well as facilitate learning that is critically reflective on long-established and oppressive social norms. 3. Psychic Distortions: Psychological distortions have to do with presuppositions generating unwarranted anxiety that impedes taking action. Psychiatrist Roger Gould’s „epigenetic‟ theory of adult development (1978, 1988) suggest that traumatic events in childhood can result in parental prohibitions that though submerged from consciousness continue to inhibit adult action by generating anxiety feelings when there is a risk of breaching them. This dynamic results in a lost function such as the ability to confront, to feel sexual, or take risks that must be regained in one is to become a fully functional adult. Adulthood is a time of regaining such lost functions. The learner must be helped to identify both the particular action that they feel blocked about taking and the source and nature of stress in making a decision to act. The learner is assisted in identifying the source of this inhibition and differentiating between the anxiety that is a function of childhood trauma and the anxiety that is warranted by their immediate adult life situation. 
With guidance, the adult can learn to distinguish between past and present pressures and between irrational and rational feelings and to challenge distorting assumptions (such as “If I confront, I may lose all control and violently assault”) that inhibit taking the needed action and regaining the lost function. The psychoeducational process of helping adults learn to overcome such ordinary existential psychological distortions can be facilitated by skilled adult counsellors and educators as well as by therapists. It is crucially important that they do so, inasmuch as the most significant adult learning occurs in connection with life transitions. While psychotherapists make transference inferences in a treatment modality, educators do not, but they can provide skilful emotional support and collaborate as co-learners in an educational context. Recent advances in counselling technology greatly enhance their potential for providing this kind of help. For example, Roger Gould’s therapeutic learning programme represents an extraordinary resource for counsellors and educators working with adults who are having trouble dealing with such stressful existential life transitions as divorce, retirement, returning to school or the work force, or a change in job status. This interactive, computerized programme of guided self-study provides the learner with the clinical insights and many of the benefits associated with short-term psychotherapy. The counsellor or educator provides emotional support, helps the learner think through choices posed by the programme, explains its theoretical context, provides supplementary information relevant to the life transition, makes referrals, and leads group discussion as required. This extract briefly adumbrates an emerging transformation theory of adult learning in which the construing of meaning is of central importance. Following Habermas (1984), I make a fundamental distinction between instrumental and communicative learning. I have identified the central function of reflection as that of validating what is known. Reflection, in the context of problem solving, commonly focuses on procedures or methods. It may also focus on premises. Reflection on premises involves a critical view of distorted presuppositions that may be epistemic, sociocultural or psychic. Meaning schemes and perspectives that are not viable are transformed through reflection. Uncritically assimilated meaning perspectives, which determine what, how, and why we learn, may be transformed through critical reflection. Reflection on one’s own premises can lead to transformative learning. In communicative learning, meaning is validated through critical discourse. The nature of discourse suggests ideal conditions for participation in a consensual assessment of the justification for an expressed or implied idea when its validity is in doubt. These ideal conditions of human communication provide a firm philosophical foundation for adult education. Transformative learning involves a particular function of reflection: reassessing the presuppositions on which our beliefs are based and acting on insights derived from the transformed meaning perspective that results from such reassessments. This learning may occur in the domains of either instrumental or communicative learning. It may involve correcting distorted assumptions (epistemic, sociocultural or psychic) from prior learning.
This extract constitutes the framework in adult learning theory for understanding the efforts of other authors who suggest specific approaches to emancipatory adult education. Emancipatory education is an organized effort to help the learner challenge presuppositions, explore alternative perspectives, transform old ways of understanding, and act on new perspectives.",
"title": ""
},
{
"docid": "153d23d5f736b9a9e0f3cb88e61dc400",
"text": "Context\nTrichostasis spinulosa (TS) is a common but underdiagnosed follicular disorder involving retention of successive telogen hair in the hair follicle. Laser hair removal is a newer treatment modality for TS with promising results.\n\n\nAims\nThis study aims to evaluate the efficacy of 800 nm diode laser to treat TS in Asian patients.\n\n\nSubjects and Methods\nWe treated 50 Indian subjects (Fitzpatrick skin phototype IV-V) with untreated trichostasis spinulosa on the nose with 800 nm diode laser at fluence ranging from 22 to 30 J/cm2 and pulse width of 30 ms. The patients were given two sittings at 8 week intervals. The evaluation was done by blinded assessment of photographs by independent dermatologists.\n\n\nResults\nTotally 45 (90%) patients had complete clearance of the lesions at the end of treatment. Five (10%) subjects needed one-third sitting for complete clearance. 45 patients had complete resolution and no recurrence even at 2 years follow-up visit. 5 patients had partial recurrence after 8-9 months and needed an extra laser session.\n\n\nConclusions\nLaser hair reduction in patients with TS targets and removes the hair follicles which are responsible for the plugged appearance. Due to permanent ablation of the hair bulb and bulge, the recurrence which is often seen with other modalities of treatment for TS is not observed here.",
"title": ""
},
{
"docid": "ad606470b92b50eae9b0f729968cde7a",
"text": "It is projected that increasing on-chip integration with technology scaling will lead to the so-called dark silicon era in which more transistors are available on a chip than can be simultaneously powered on. It is conventionally assumed that the dark silicon will be provisioned with heterogeneous resources, for example dedicated hardware accelerators. In this paper we challenge the conventional assumption and build a case for homogeneous dark silicon CMPs that exploit the inherent variations in process parameters that exist in scaled technologies to offer increased performance. Since process variations result in core-to-core variations in power and frequency, the idea is to cherry pick the best subset of cores for an application so as to maximize performance within the power budget. To this end, we propose a polynomial time algorithm for optimal core selection, thread mapping and frequency assignment for a large class of multi-threaded applications. Our experimental results based on the Sniper multi-core simulator show that up to 22% and 30% performance improvement is observed for homogeneous CMPs with 33% and 50% dark silicon, respectively.",
"title": ""
},
{
"docid": "5588970df0b3ea94f7cc470963b419ef",
"text": "Obesity, type 2 diabetes, and non-alcoholic fatty liver disease (NAFLD) are serious health concerns, especially in Western populations. Antibiotic exposure and high-fat diet (HFD) are important and modifiable factors that may contribute to these diseases. To investigate the relationship of antibiotic exposure with microbiome perturbations in a murine model of growth promotion, C57BL/6 mice received lifelong sub-therapeutic antibiotic treatment (STAT), or not (control), and were fed HFD starting at 13 weeks. To characterize microbiota changes caused by STAT, the V4 region of the 16S rRNA gene was examined from collected fecal samples and analyzed. In this model, which included HFD, STAT mice developed increased weight and fat mass compared to controls. Although results in males and females were not identical, insulin resistance and NAFLD were more severe in the STAT mice. Fecal microbiota from STAT mice were distinct from controls. Compared with controls, STAT exposure led to early conserved diet-independent microbiota changes indicative of an immature microbial community. Key taxa were identified as STAT-specific and several were found to be predictive of disease. Inferred network models showed topological shifts concurrent with growth promotion and suggest the presence of keystone species. These studies form the basis for new models of type 2 diabetes and NAFLD that involve microbiome perturbation.",
"title": ""
},
{
"docid": "f29b8c75a784a71dfaac5716017ff4f3",
"text": "The objective of this paper is to design a multi-agent system architecture for the Scrum methodology. Scrum is an iterative, incremental framework for software development which is flexible, adaptable and highly productive. An agent is a system situated within and a part of an environment that senses the environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser, 1996). To our knowledge, this is first attempt to include software agents in the Scrum framework. Furthermore, our design covers all the stages of software development. Alternative approaches were only restricted to the analysis and design phases. This Multi-Agent System (MAS) Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a physical implementation by using an appropriate agent development framework. The development of an experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will provide support to the development team who will no longer be expected to report, update and manage non-core activities daily.",
"title": ""
},
{
"docid": "5125f5099f77a32ff9a1f2054ef1e664",
"text": "Human activities are inherently translation invariant and hierarchical. Human activity recognition (HAR), a field that has garnered a lot of attention in recent years due to its high demand in various application domains, makes use of time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting the inherent characteristics of activities and 1D time-series signals, at the same time providing a way to automatically and data-adaptively extract robust features from raw data. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although difference of feature complexity level decreases with every additional layer. A wider time span of temporal local correlation can be exploited (1x9~1x14) and a low pooling size (1x2~1x3) is shown to be beneficial. Convnets also achieved an almost perfect classification on moving activities, especially very similar ones which were previously perceived to be very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR for the benchmark dataset collected from 30 volunteer subjects, achieving an overall performance of 94.79% on the test set with raw sensor data, and 95.75% with additional information of temporal fast Fourier transform of the HAR data set.",
"title": ""
},
{
"docid": "75968e08364929ec1e9098a2d5eb0869",
"text": "Aneurysmal bone cysts (ABC) are benign osteolytic lesions that are more common in young people than in adults and involve the skull only exceptionally. The origin of ABC is still debated; indeed, some authors consider ABC to be an anomalous bony reaction that is secondary to traumas or tumours. Conversely, others consider ABC to be a distinct entity. A case of a healthy young female affected by a left frontal ABC is reported here. The clinical onset was characterised by the sudden appearance of a tender and mildly painful frontal mass. Neuroradiological assessment showed a well-circumscribed lytic lesion of the frontal bone with predominantly outward extension. En bloc surgical removal of the lesion was successfully achieved; a reconstructive cranioplasty was also performed to repair the cranial defect. The rarity of the condition described, together with the absence of clear guidelines, prompted us to review the more recent literature with the twin goals of identifying radiological features and becoming able to address the diagnosis and rules for treatment of such a rare entity.",
"title": ""
},
{
"docid": "058f57adeaa6c2bbab49e7cec1a47c6f",
"text": "The purpose of this document is to summarize the main points from the paper, “On Bias, Variance, 0/1 Loss, and the Curse of Dimensionality”, written by Jerome H.Friedman(1997).",
"title": ""
},
{
"docid": "5aa10413b995b6b86100585f3245e4d9",
"text": "In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time-for the first time-using 16 Neurocores integrated on a board that consumes three watts.",
"title": ""
},
{
"docid": "48cfb0c1b3b2ce7ce00aa972a3e599e7",
"text": "This paper discusses some relevant work of emotion detection from text which is a main field in affecting computing and artificial intelligence field. Artificial intelligence is not only the ability for a machine to think or interact with end user smartly but also to act humanly or rationally so emotion detection from text plays a key role in human-computer interaction. It has attracted the attention of many researchers due to the great revolution of emotional data available on social and web applications of computers and much more in mobile devices. This survey mainly collects history of unsupervised emotion detection from text.",
"title": ""
},
{
"docid": "ff939b33128e2b8d2cd0074a3b021842",
"text": "Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists to increase diagnosis accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation. In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. By dividing into seven classes (i.e., thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network), we have introduced corresponding techniques and representative papers accordingly. We have summarized and compared many techniques on BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation is still an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.",
"title": ""
},
{
"docid": "d8cb31c41a2e1ff3f3d43367aa165680",
"text": "This article reviews the evidence for rhythmic categorization that has emerged on the basis of rhythm metrics, and argues that the metrics are unreliable predictors of rhythm which provide no more than a crude measure of timing. It is further argued that timing is distinct from rhythm and that equating them has led to circularity and a psychologically questionable conceptualization of rhythm in speech. It is thus proposed that research on rhythm be based on the same principles for all languages, something that does not apply to the widely accepted division of languages into stress- and syllable-timed. The hypothesis is advanced that these universal principles are grouping and prominence and evidence to support it is provided.",
"title": ""
},
{
"docid": "893408bc41eb46a75fc59e23f74339cf",
"text": "We discuss cutting stock problems (CSPs) from the perspective of the paper industry and the financial impact they make. Exact solution approaches and heuristics have been used for decades to support cutting stock decisions in that industry. We have developed polylithic solution techniques integrated in our ERP system to solve a variety of cutting stock problems occurring in real world problems. Among them is the simultaneous minimization of the number of rolls and the number of patterns while not allowing any overproduction. For two cases, CSPs minimizing underproduction and CSPs with master rolls of different widths and availability, we have developed new column generation approaches. The methods are numerically tested using real world data instances. An assembly of current solved and unsolved standard and non-standard CSPs at the forefront of research are put in perspective.",
"title": ""
},
{
"docid": "78f272578191996200259e10d209fe19",
"text": "The information in government web sites, which are widely adopted in many countries, must be accessible for all people, easy to use, accurate and secure. The main objective of this study is to investigate the usability, accessibility and security aspects of e-government web sites in Kyrgyz Republic. The analysis of web government pages covered 55 sites listed in the State Information Resources of the Kyrgyz Republic and five government web sites which were not included in the list. Analysis was conducted using several automatic evaluation tools. Results suggested that government web sites in Kyrgyz Republic have a usability error rate of 46.3 % and accessibility error rate of 69.38 %. The study also revealed security vulnerabilities in these web sites. Although the “Concept of Creation and Development of Information Network of the Kyrgyz Republic” was launched at September 23, 1994, government web sites in the Kyrgyz Republic have not been reviewed and still need great efforts to improve accessibility, usability and security.",
"title": ""
},
{
"docid": "f13d3c01729d9f3dcb2b220a0fcce902",
"text": "User generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of \"emotional situations\" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hash tags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, big rams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with a training data containing about 2 million tweets.",
"title": ""
},
{
"docid": "97ec541daef17eb4ff0772e34ee4de48",
"text": "Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
}
] |
scidocsrr
|
7ff1ce2e43512aae09b6ba5a13690fe3
|
Patch-based Terrain Synthesis
|
[
{
"docid": "df6f6e52f97cfe2d7ff54d16ed9e2e54",
"text": "Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.",
"title": ""
},
{
"docid": "374e5a4ad900a6f31e4083bef5c08ca4",
"text": "Procedural modeling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modeling attractive for creating virtual environments increasingly used in movies, games, and simulations. We survey procedural methods that are useful to generate features of virtual worlds, including terrains, vegetation, rivers, roads, buildings, and entire cities. In this survey, we focus particularly on the degree of intuitive control and of interactivity offered by each procedural method, because these properties are instrumental for their typical users: designers and artists. We identify the most promising research results that have been recently achieved, but we also realize that there is far from widespread acceptance of procedural methods among non-technical, creative professionals. We conclude by discussing some of the most important challenges of procedural modeling.",
"title": ""
}
] |
[
{
"docid": "193042bd07d5e9672b04ede9160d406c",
"text": "We report on the flip chip packaging of Micro-Electro-Mechanical System (MEMS)-based digital silicon photonic switching device and the characterization results of 12 × 12 switching ports. The challenges in packaging N<sup> 2</sup> electrical and 2N optical interconnections are addressed with single-layer electrical redistribution lines of 25 <italic>μ</italic>m line width and space on aluminum nitride interposer and 13° polished 64-channel lidless fiber array (FA) with a pitch of 127 <italic>μ</italic>m. 50 <italic>μ</italic>m diameter solder spheres are laser-jetted onto the electrical bond pads surrounded by suspended MEMS actuators on the device before fluxless flip-chip bonding. A lidless FA is finally coupled near-vertically onto the device gratings using a 6-degree-of-freedom (6-DOF) alignment system. Fiber-to-grating coupler loss of 4.25 dB/facet, 10<sup>–11 </sup> bit error rate (BER) through the longest optical path, and 0.4 <italic>μ</italic>s switch reconfiguration time have been demonstrated using 10 Gb/s Ethernet data stream.",
"title": ""
},
{
"docid": "ed7832f6fbb1777ab3139cc8b5dd2d28",
"text": "Tree ensemble models such as random forests and boosted trees are among the most widely used and practically successful predictive models in applied machine learning and business analytics. Although such models have been used to make predictions based on exogenous, uncontrollable independent variables, they are increasingly being used to make predictions where the independent variables are controllable and are also decision variables. In this paper, we study the problem of tree ensemble optimization: given a tree ensemble that predicts some dependent variable using controllable independent variables, how should we set these variables so as to maximize the predicted value? We formulate the problem as a mixed-integer optimization problem. We theoretically examine the strength of our formulation, provide a hierarchy of approximate formulations with bounds on approximation quality and exploit the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based on iteratively generating tree split constraints. We test our methodology on real data sets, including two case studies in drug design and customized pricing, and show that our methodology can efficiently solve large-scale instances to near or full optimality, and outperforms solutions obtained by heuristic approaches. In our drug design case, we show how our approach can identify compounds that efficiently trade-off predicted performance and novelty with respect to existing, known compounds. In our customized pricing case, we show how our approach can efficiently determine optimal store-level prices under a random forest model that delivers excellent predictive accuracy.",
"title": ""
},
{
"docid": "77564f157ea8ab43d6d9f95a212e7948",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "ca6e39436be1b44ab0e20e0024cd0bbe",
"text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.",
"title": ""
},
{
"docid": "2eabe3d3edbc9b57b1a13c41688b9d68",
"text": "This paper presents a design method of on-chip patch antenna integration in a standard CMOS technology without post processing. A 60 GHz on-chip patch antenna is designed utilizing the top metal layer and an intermediate metal layer as the patch and ground plane, respectively. Interference between the patch and digital baseband circuits located beneath the ground plane is analyzed. The 60 GHz on-chip antenna occupies an area of 1220 µm by 1580 µm with carefully placed fillers and slots to meet the design rules of the CMOS process. The antenna is centered at 60.51 GHz with 810 MHz bandwidth. The peak gain and radiation efficiency are −3.32 dBi and 15.87%, respectively. Analysis for mutual signal coupling between the antenna and the clock H-tree beneath the ground plane is reported, showing a −61 dB coupling from the antenna to the H-tree and a −95 dB coupling of 2 GHz clock signal from the H-tree to the antenna.",
"title": ""
},
{
"docid": "1403e5ee76253ebf7e58300bf9f4dc8a",
"text": "PURPOSE\nTo evaluate the marginal fit of CAD/CAM copings milled from hybrid ceramic (Vita Enamic) blocks and lithium disilicate (IPS e.max CAD) blocks, and to evaluate the effect of crystallization firing on the marginal fit of lithium disilicate copings.\n\n\nMATERIALS AND METHODS\nA standardized metal die with a 1-mm-wide shoulder finish line was imaged using the CEREC AC Bluecam. The coping was designed using CEREC 3 software. The design was used to fabricate 15 lithium disilicate and 15 hybrid ceramic copings. Design and milling were accomplished by one operator. The copings were seated on the metal die using a pressure clamp with a uniform pressure of 5.5 lbs. A Macroview Microscope (14×) was used for direct viewing of the marginal gap. Four areas were imaged on each coping (buccal, distal, lingual, mesial). Image analysis software was used to measure the marginal gaps in μm at 15 randomly selected points on each of the four surfaces. A total of 60 measurements were made per specimen. For lithium disilicate copings the measurements for marginal gap were made before and after crystallization firing. Data were analyzed using paired t-test and Kruskal-Wallis test.\n\n\nRESULTS\nThe overall mean difference in marginal gap between the hybrid ceramic and crystallized lithium disilicate copings was statistically significant (p < 0.01). Greater mean marginal gaps were measured for crystallized lithium disilicate copings. The overall mean difference in marginal gap before and after firing (precrystallized and crystallized lithium disilicate copings) showed an average of 62 μm increase in marginal gap after firing. This difference was also significant (p < 0.01).\n\n\nCONCLUSIONS\nA significant difference exists in the marginal gap discrepancy when comparing hybrid ceramic and lithium disilicate CAD/CAM crowns. Also crystallization firing can result in a significant increase in the marginal gap of lithium disilicate CAD/CAM crowns.",
"title": ""
},
{
"docid": "7b3d2bd1f6975b8089b1830674f284f5",
"text": "We present a method that is able to find the most informative video portions, leading to a summarization of video sequences. In contrast to the existing works, our method is able to capture the important video portions through information about individual local motion regions, as well as the interactions between these motion regions. In particular, our proposed context-aware video summarization (CAVS) framework adopts the methodology of sparse coding with generalized sparse group lasso to learn a dictionary of video features and a dictionary of spatiotemporal feature correlation graphs. Sparsity ensures that the most informative features and relationships are retained. The feature correlations, represented by a dictionary of graphs, indicate how motion regions correlate with each other globally. When a new video segment is processed by CAVS, both dictionaries are updated in an online fashion. In particular, CAVS scans through every video segment to determine if the new features along with the feature correlations can be sparsely represented by the learned dictionaries. If not, the dictionaries are updated, and the corresponding video segments are incorporated into the summarized video. The results on four public data sets, mostly composed of surveillance videos and a small amount of other online videos, show the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "6c5cabfa5ee5b9d67ef25658a4b737af",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "5a6de3b39cfad9929e8e4bb15b30ff43",
"text": "The paper describes about a method of int rusion detection that uses machine learning algorit hms. Here we discuss about the combinational use of two machine l arning algorithms called Principal Component Anal ysis and Naive Bayes classifier. The dimensionality of the dataset is reduced by using the principal component analys i and the classification of the dataset in to normal and atta ck classes is done by using Naïve Bayes Classifier. The experiments were conducted on the intrusion detection dataset called KDD’99 cup dataset. The comparison of the results with and without dimensionality reduction is also done.",
"title": ""
},
{
"docid": "192b0b494719184be8d40ff9ad28aecc",
"text": "The primary goal of the model proposed in this paper is to predict airline delays caused by inclement weather conditions using data mining and supervised machine learning algorithms. US domestic flight data and the weather data from 2005 to 2015 were extracted and used to train the model. To overcome the effects of imbalanced training data, sampling techniques are applied. Decision trees, random forest, the AdaBoost and the k-Nearest-Neighbors were implemented to build models which can predict delays of individual flights. Then, each of the algorithms' prediction accuracy and the receiver operating characteristic (ROC) curve were compared. In the prediction step, flight schedule and weather forecast were gathered and fed into the model. Using those data, the trained model performed a binary classification to predicted whether a scheduled flight will be delayed or on-time.",
"title": ""
},
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "b9bf838263410114ec85c783d26d92aa",
"text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.",
"title": ""
},
{
"docid": "15800830f8774211d48110980d08478a",
"text": "This paper surveys the problem of navigation for autonomous underwater vehicles (AUVs). Marine robotics technology has undergone a phase of dramatic increase in capability in recent years. Navigation is one of the key challenges that limits our capability to use AUVs to address problems of critical importance to society. Good navigation information is essential for safe operation and recovery of an AUV. For the data gathered by an AUV to be of value, the location from which the data has been acquired must be accurately known. The three primary methods for navigation of AUVs are (1) dead-reckoning and inertial navigation systems, (2) acoustic navigation, and (3) geophysical navigation techniques. The current state-of-the-art in each of these areas is summarized, and topics for future research are suggested.",
"title": ""
},
{
"docid": "2b8d90c11568bb8b172eca20a48fd712",
"text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).",
"title": ""
},
{
"docid": "8bd9a5cf3ca49ad8dd38750410a462b0",
"text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. Block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.",
"title": ""
},
{
"docid": "a0b40209ee7655fcb08b080467d48915",
"text": "This note describes a simplification of the GKR interactive proof for circuit evaluation (Goldwasser, Kalai, and Rothblum, J. ACM 2015), as efficiently instantiated by Cormode, Mitzenmacher, and Thaler (ITCS 2012). The simplification reduces the prover runtime, round complexity, and total communication cost of the protocol by roughly 33%.",
"title": ""
},
{
"docid": "41c890e5c5925769962713de3f84b948",
"text": "In recent years, with the development of 3D technologies, 3D model retrieval has become a hot topic. The key point of 3D model retrieval is to extract robust feature for 3D model representation. In order to improve the effectiveness of method on 3D model retrieval, this paper proposes a feature extraction model based on convolutional neural networks (CNN). First, we extract a set of 2D images from 3D model to represent each 3D object. SIFT detector is utilized to detect interesting points from each 2D image and extract interesting patches to represent local information of each 3D model. X-means is leveraged to generate the CNN filters. Second, a single CNN layer learns low-level features which are then given as inputs to multiple recursive neural networks (RNN) in order to compose higher order features. RNNs can generate the final feature for 2D image representation. Finally, nearest neighbor is used to compute the similarity between different 3D models in order to handle the retrieval problem. Extensive comparison experiments were on the popular ETH and MV-RED 3D model datasets. The results demonstrate the superiority of the proposed method.",
"title": ""
},
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
}
] |
scidocsrr
|
eb31e628109f04463ac7dfc75171df4d
|
A Substrate Integrated Waveguide Circular Polarized Slot Radiator and Its Linear Array
|
[
{
"docid": "3746275fe4cfd6132d9b7a2a38639356",
"text": "A design procedure for circularly polarized waveguide slot linear arrays is presented. The array element, a circularly polarized radiator, consists of two closely spaced inclined radiating slots. Both the characterization of the isolated element and the evaluation of the mutual coupling between the array elements are performed by using a method of moments procedure. A number of traveling wave arrays with equiphase excitations are designed and then analyzed using a finite element method commercial software. A good circular polarization is achieved, the design goals on the far field pattern are fulfilled and high antenna efficiency can be obtained",
"title": ""
}
] |
[
{
"docid": "026628151680da901c741766248f0055",
"text": "We analyzea corpusof referringexpressionscollected from userinteractionswith a multimodal travel guide application.Theanalysissuggeststhat,in dramaticcontrastto normalmodesof human-humaninteraction,the interpretationof referringexpressionscanbecomputed with very high accuracy usinga modelwhich pairsan impoverishednotionof discoursestatewith asimpleset of rulesthatareinsensiti ve to the type of referringexpressionused. We attribute this result to the implicit mannerin which theinterfaceconveys thesystem’ s beliefs abouttheoperati ve discoursestate,to which users tailor their choiceof referringexpressions.This result offersnew insightinto thewaycomputerinterfacescan shapea user’ s languagebehavior, insightswhich can be exploited to bring otherwisedifficult interpretation problemsinto therealmof tractability.",
"title": ""
},
{
"docid": "6677149025a415e44778d1011b617c36",
"text": "In this paper controller synthesis based on standard and dynamic sliding modes for an uncertain nonlinear MIMO Three tank System is presented. Two types of sliding mode controllers are synthesized; first controller is based on standard first order sliding modes while second controller uses dynamic sliding modes. Sliding manifolds for both controllers are designed in-order to ensure finite time convergence of sliding variable for tracking the desired system trajectories. Simulation results are presented showing the performance analysis of both sliding mode controllers. Simulations are also carried out to assess the performance of dynamic sliding mode controller against parametric uncertainties / disturbances. A comparison of designed sliding mode controllers with LMI based robust H∞ controller is also discussed. The performance of dynamic sliding mode control in terms of response time, control effort and robustness of dynamic sliding mode controller is shown to be better than standard sliding mode controller and H∞ controllers.",
"title": ""
},
{
"docid": "76313eb95f3fbe4453cbe3018bace02f",
"text": "We study the diffusion process in an online social network given the individual connections between members. We model the adoption decision of individuals as a binary choice affected by three factors: (1) the local network structure formed by already adopted neighbors, (2) the average characteristics of adopted neighbors (influencers), and (3) the characteristics of the potential adopters. Focusing on the first factor, we find two marked effects. First, an individual who is connected to many adopters has a higher adoption probability (degree effect). Second, the density of connections in a group of already adopted consumers has a strong positive effect on the adoption of individuals connected to this group (clustering effect). We also record significant effects for influencer and adopter characteristics. Specifically, for adopters, we find that their position in the entire network and some demographic variables are good predictors of adoption. Similarly, in the case of already adopted individuals, average demographics and global network position can predict their influential power on their neighbors. An interesting counter-intuitive finding is that the average influential power of individuals decreases with the total number of their contacts. These results have practical implications for viral marketing in a context where, increasingly, a variety of technology platforms are considering to leverage their consumers’ revealed connection patterns. In particular, our model performs well in predicting the next set of adopters.",
"title": ""
},
{
"docid": "73beec89ce06abfe10edb9e446b8b2f8",
"text": "Pinching is an important capability for mobile robots handling small items or tools. Successful pinching requires force-closure and, in underwater applications, gentle suction flow at the fingertips can dramatically improve the handling of light objects by counteracting the negative effects of water lubrication and enhancing friction. In addition, monitoring the flow gives a measure of suction-engagement and can act as a binary tactile sensor. Although a suction system adds complexity, elastic tubes can double as passive spring elements for desired finger kinematics.",
"title": ""
},
{
"docid": "1b2515c8d20593d7b4446d695e28389f",
"text": "Based on microwave C-sections, rat-race coupler is designed to have a dual-band characteristic and a miniaturized area. The C-section together with two transmission line sections attached to both of its ends is synthesized to realize a phase change of 90° at the first frequency, and 270° at the second passband. The equivalence is established by the transmission line theory, and transcendental equations are derived to determine its structure parameters. Two circuits are realized in this presentation; one is designed at 2.45/5.2 GHz and the other at 2.45/5.8 GHz. The latter circuit occupies only 31% of the area of a conventional hybrid ring at the first band. It is believed that this circuit has the best size reduction for microstrip dual-band rat-race couplers in open literature. The measured results show good agreement with simulation responses.",
"title": ""
},
{
"docid": "0de919048191a4bbbb83a1f0e7fa9522",
"text": "In this paper, we propose a novel threat model-driven security testing approach for detecting undesirable threat behavior at runtime. Threats to security policies are modelled with UML (Unified Modeling Language) sequence diagrams. From a design-level threat model we extract a set of threat traces, each of which is an event sequence that should not occur during the system execution. The same threat model is also used to decide what kind of information should be collected at runtime and to guide the code instrumentation. The instrumented code is recompiled and executed using test cases randomly generated. The execution traces are collected and analyzed to verify whether the aforementioned undesirable threat traces are matched. If an execution trace is an instance of a threat trace, security violations are reported and actions should be taken to mitigate the threat in the system. Thus the linkage between models, code implementations, and security testing are extended to form a systematic methodology that can test certain security policies.",
"title": ""
},
{
"docid": "90e5fc05d96e84668816eb70a06ab709",
"text": "This paper introduces a cooperative parallel metaheuristic for solving the capacitated vehicle routing problem. The proposed metaheuristic consists of multiple parallel tabu search threads that cooperate by asynchronously exchanging best found solutions through a common solution pool. The solutions sent to the pool are clustered according to their similarities. The search history information identified from the solution clusters is applied to guide the intensification or diversification of the tabu search threads. Computational experiments on two sets of large scale benchmarks from the literature demonstrate that the suggested metaheuristic is highly competitive, providing new best solutions to ten of those well-studied instances.",
"title": ""
},
{
"docid": "3b27f02b96f079e57714ef7c2f688b48",
"text": "Polycystic ovary syndrome (PCOS) affects 5-10% of women in reproductive age and is characterized by oligo/amenorrhea, androgen excess, insulin resistance, and typical polycystic ovarian morphology. It is the most common cause of infertility secondary to ovulatory dysfunction. The underlying etiology is still unknown but is believed to be multifactorial. Insulin-sensitizing compounds such as inositol, a B-complex vitamin, and its stereoisomers (myo-inositol and D-chiro-inositol) have been studied as an effective treatment of PCOS. Administration of inositol in PCOS has been shown to improve not only the metabolic and hormonal parameters but also ovarian function and the response to assisted-reproductive technology (ART). Accumulating evidence suggests that it is also capable of improving folliculogenesis and embryo quality and increasing the mature oocyte yield following ovarian stimulation for ART in women with PCOS. In the current review, we collate the evidence and summarize our current knowledge on ovarian stimulation and ART outcomes following inositol treatment in women with PCOS undergoing in vitro fertilization (IVF) and/or intracytoplasmic sperm injection (ICSI).",
"title": ""
},
{
"docid": "c7cfc79579704027bf28fc7197496b8c",
"text": "There is a growing trend nowadays for patients to seek the least invasive treatments possible with less risk of complications and downtime to correct rhytides and ptosis characteristic of aging. Nonsurgical face and neck rejuvenation has been attempted with various types of interventions. Suture suspension of the face, although not a new idea, has gained prominence with the advent of the so called \"lunch-time\" face-lift. Although some have embraced this technique, many more express doubts about its safety and efficacy limiting its widespread adoption. The present review aims to evaluate several clinical parameters pertaining to thread suspensions such as longevity of results of various types of polypropylene barbed sutures, their clinical efficacy and safety, and the risk of serious adverse events associated with such sutures. Early results of barbed suture suspension remain inconclusive. Adverse events do occur though mostly minor, self-limited, and of short duration. Less clear are the data on the extent of the peak correction and the longevity of effect, and the long-term effects of the sutures themselves. The popularity of barbed suture lifting has waned for the time being. Certainly, it should not be presented as an alternative to a face-lift.",
"title": ""
},
{
"docid": "383e88fd5dc669aff5f602f35b319380",
"text": "Automatic Turret Gun (ATG) is a weapon system used in numerous combat platforms and vehicles such as in tanks, aircrafts, or stationary ground platforms. ATG plays a big role in both defensive and offensive scenario. It allows combat engagement while the operator of ATG (soldier) covers himself inside a protected control station. On the other hand, ATGs have significant mass and dimension, therefore susceptible to inertial disturbances that need to be compensated to enable the ATG to reach the targeted position quickly and accurately while undergoing disturbances from weapon fire or platform movement. The paper discusses various conventional control method applied in ATG, namely PID controller, RAC, and RACAFC. A number of experiments have been carried out for various range of angle both in azimuth and elevation axis of turret gun. The results show that for an ATG system working under disturbance, RACAFC exhibits greater performance than both RAC and PID, but in experiments without load, equally satisfactory results are obtained from RAC. The exception is for the PID controller, which cannot reach the entire angle given.",
"title": ""
},
{
"docid": "1d5cd4756e424f3d282545f029c1e9bb",
"text": "Anomaly detection systems deployed for monitoring in oil and gas industries are mostly WSN based systems or SCADA systems which all suffer from noteworthy limitations. WSN based systems are not homogenous or incompatible systems. They lack coordinated communication and transparency among regions and processes. On the other hand, SCADA systems are expensive, inflexible, not scalable, and provide data with long delay. In this paper, a novel IoT based architecture is proposed for Oil and gas industries to make data collection from connected objects as simple, secure, robust, reliable and quick. Moreover, it is suggested that how this architecture can be applied to any of the three categories of operations, upstream, midstream and downstream. This can be achieved by deploying a set of IoT based smart objects (devices) and cloud based technologies in order to reduce complex configurations and device programming. Our proposed IoT architecture supports the functional and business requirements of upstream, midstream and downstream oil and gas value chain of geologists, drilling contractors, operators, and other oil field services. Using our proposed IoT architecture, inefficiencies and problems can be picked and sorted out sooner ultimately saving time and money and increasing business productivity.",
"title": ""
},
{
"docid": "238620ca0d9dbb9a4b11756630db5510",
"text": "this planet and many oceanic and maritime applications seem relatively slow in exploiting the state-of-the-art info-communication technologies. The natural and man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like sensor networks as an economically viable alternative to currently adopted and costly methods used in seismic monitoring, structural health monitoring, installation and mooring, etc. Underwater sensor networks (UWSNs) are the enabling technology for wide range of applications like monitoring the strong influences and impact of climate regulation, nutrient production, oil retrieval and transportation The underwater environment differs from the terrestrial radio environment both in terms of its energy costs and channel propagation phenomena. The underwater channel is characterized by long propagation times and frequency-dependent attenuation that is highly affected by the distance between nodes as well as by the link orientation. Some of other issues in which UWSNs differ from terrestrial are limited bandwidth, constrained battery power, more failure of sensors because of fouling and corrosion, etc. This paper presents several fundamental key aspects and architectures of UWSNs, emerging research issues of underwater sensor networks and exposes the researchers into networking of underwater communication devices for exciting ocean monitoring and exploration applications. I. INTRODUCTION The Earth is a water planet. Around 70% of the surface of earth is covered by water. This is largely unexplored area and recently it has fascinated humans to explore it. Natural or man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like wireless sensor",
"title": ""
},
{
"docid": "483d8347967568fc8e6f0b3fec048c77",
"text": "We present a data-driven, probabilistic trajectory optimization framework for systems with unknown dynamics, called Probabilistic Differential Dynamic Programming (PDDP). PDDP takes into account uncertainty explicitly for dynamics models using Gaussian processes (GPs). Based on the second-order local approximation of the value function, PDDP performs Dynamic Programming around a nominal trajectory in Gaussian belief spaces. Different from typical gradientbased policy search methods, PDDP does not require a policy parameterization and learns a locally optimal, time-varying control policy. We demonstrate the effectiveness and efficiency of the proposed algorithm using two nontrivial tasks. Compared with the classical DDP and a state-of-the-art GP-based policy search method, PDDP offers a superior combination of learning speed, data efficiency and applicability.",
"title": ""
},
{
"docid": "7d25c646a8ce7aa862fba7088b8ea915",
"text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to",
"title": ""
},
{
"docid": "d308a1dfb10d538ee0bcb729dcbf2c44",
"text": "I test the disposition effect, the tendency of investors to hold losing investments too long and sell winning investments too soon, by analyzing trading records for 10,000 accounts at a large discount brokerage house. These investors demonstrate a strong preference for realizing winners rather than losers. Their behavior does not appear to be motivated by a desire to rebalance portfolios, or to avoid the higher trading costs of low priced stocks. Nor is it justified by subsequent portfolio performance. For taxable investments, it is suboptimal and leads to lower after-tax returns. Tax-motivated selling is most evident in December. THE TENDENCY TO HOLD LOSERS too long and sell winners too soon has been labeled the disposition effect by Shefrin and Statman (1985). For taxable investments the disposition effect predicts that people will behave quite differently than they would if they paid attention to tax consequences. To test the disposition effect, I obtained the trading records from 1987 through 1993 for 10,000 accounts at a large discount brokerage house. An analysis of these records shows that, overall, investors realize their gains more readily than their losses. The analysis also indicates that many investors engage in taxmotivated selling, especially in December. Alternative explanations have been proposed for why investors might realize their profitable investments while retaining their losing investments. Investors may rationally, or irrationally, believe that their current losers will in the future outperform their current * University of California, Davis. This paper is based on my dissertation at the University of California, Berkeley. I would like to thank an anonymous referee, Brad Barber, Peter Klein, Hayne Leland, Richard Lyons, David Modest, John Nofsinger, James Poterba, Mark Rubinstein, Paul Ruud, Richard Sansing, Richard Thaler, Brett Trueman, and participants at the Berkeley Program in Finance, the NBER behavioral finance meeting, the Financial Management Association Conference, the American Finance Association meetings, and seminar participants at UC Berkeley, the Yale School of Management, the University of California, Davis, the University of Southern California, the University of North Carolina, Duke University, the Wharton School, Stanford University, the University of Oregon, Harvard University, the Massachusetts Institute of Technology, the Amos Tuck School, the University of Chicago, the University of British Columbia, Northwestern University, the University of Texas, UCLA, the University of Michigan, and Columbia University for helpful comments. I would also like to thank Jeremy Evnine and especially the discount brokerage house that provided the data necessary for this study. Financial support from the Nasdaq Foundation is gratefully acknowledged.",
"title": ""
},
{
"docid": "49f68a9534a602074066948a13164ad4",
"text": "Recent developments in Web technologies and using AI techniques to support efforts in making the Web more intelligent and provide higher-level services to its users have opened the door to building the Semantic Web. That fact has a number of important implications for Web-based education, since Web-based education has become a very important branch of educational technology. Classroom independence and platform independence of Web-based education, availability of authoring tools for developing Web-based courseware, cheap and efficient storage and distribution of course materials, hyperlinks to suggested readings, digital libraries, and other sources of references relevant for the course are but a few of a number of clear advantages of Web-based education. However, there are several challenges in improving Web-based education, such as providing for more adaptivity and intelligence. Developments in the Semantic Web, while contributing to the solution to these problems, also raise new issues that must be considered if we are to progress. This paper surveys the basics of the Semantic Web and discusses its importance in future Web-based educational applications. Instead of trying to rebuild some aspects of a human brain, we are going to build a brain of and for humankind. D. Fensel and M.A. Musen (Fensel & Musen, 2001)",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "497d72ce075f9bbcb2464c9ab20e28de",
"text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.",
"title": ""
},
{
"docid": "98e78d8fb047140a73f2a43cbe4a1c74",
"text": "Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7× speedup over the standard BWA-MEM sequence aligner running on a 56-thread dualsocket 14-core Xeon E5 server processor, while reducing power consumption by 12× and area by 5.6×.",
"title": ""
},
{
"docid": "4f827fa8a868da051e92d03a9f5f7c75",
"text": "Ever increasing volumes of biosolids (treated sewage sludge) are being produced by municipal wastewater facilities. This is a consequence of the continued expansion of urban areas, which in turn require the commissioning of new treatment plants or upgrades to existing facilities. Biosolids contain nutrients and energy which can be used in agriculture or waste-to-energy processes. Biosolids have been disposed of in landfills, but there is an increasing pressure from regulators to phase out landfilling. This article performs a critical review on options for the management of biosolids with a focus on pyrolysis and the application of the solid fraction of pyrolysis (biochar) into soil.",
"title": ""
}
] |
scidocsrr
|
8b0ede7f613381d5b25c09ddc44c8203
|
Is dyslexia necessarily associated with negative feelings of self-worth? A review and implications for future research.
|
[
{
"docid": "4ccea211a4b3b01361a4205990491764",
"text": "published by the press syndicate of the university of cambridge Vygotsky's educational theory in cultural context / edited by Alex Kozulin. .. [et al.]. p. cm. – (Learning in doing) Includes bibliographical references and index.",
"title": ""
}
] |
[
{
"docid": "e730935b097cb4c4f36221d774d2e63a",
"text": "This paper outlines key design principles of Scilla—an intermediatelevel language for verified smart contracts. Scilla provides a clean separation between the communication aspect of smart contracts on a blockchain, allowing for the rich interaction patterns, and a programming component, which enjoys principled semantics and is amenable to formal verification. Scilla is not meant to be a high-level programming language, and we are going to use it as a translation target for high-level languages, such as Solidity, for performing program analysis and verification, before further compilation to an executable bytecode. We describe the automata-based model of Scilla, present its programming component and show how contract definitions in terms of automata streamline the process of mechanised verification of their safety and temporal properties.",
"title": ""
},
{
"docid": "e34ad4339934d9b9b4019fad37f8dd4e",
"text": "This paper presents a technique for estimating the threedimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.",
"title": ""
},
{
"docid": "8a8edb63c041a01cbb887cd526b97eb0",
"text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.",
"title": ""
},
{
"docid": "d9e09589352431cafb6e579faf91afa8",
"text": "The purpose of this study was to investigate the effects of training muscle groups 1 day per week using a split-body routine (SPLIT) vs. 3 days per week using a total-body routine (TOTAL) on muscular adaptations in well-trained men. Subjects were 20 male volunteers (height = 1.76 ± 0.05 m; body mass = 78.0 ± 10.7 kg; age = 23.5 ± 2.9 years) recruited from a university population. Participants were pair matched according to baseline strength and then randomly assigned to 1 of the 2 experimental groups: a SPLIT, where multiple exercises were performed for a specific muscle group in a session with 2-3 muscle groups trained per session (n = 10) or a TOTAL, where 1 exercise was performed per muscle group in a session with all muscle groups trained in each session (n = 10). Subjects were tested pre- and poststudy for 1 repetition maximum strength in the bench press and squat, and muscle thickness (MT) of forearm flexors, forearm extensors, and vastus lateralis. Results showed significantly greater increases in forearm flexor MT for TOTAL compared with SPLIT. No significant differences were noted in maximal strength measures. The findings suggest a potentially superior hypertrophic benefit to higher weekly resistance training frequencies.",
"title": ""
},
{
"docid": "8b1fa33cc90434abddf5458e05db0293",
"text": "The Stand-Alone Modula-2 System (SAM2S) is a portable, concurrent operating system and Modula-2 programming support environment. It is based on a highly modular kernel task running on single process-multiplexed microcomputers. SAM2S offers extensive network communication facilities. It provides the foundation for the locally resident portions of the MICROS distributed operating system for large netcomputers. SAM2S now supports a five-pass Modula-2 compiler, a task linker, link and load file decoders, a static symbolic debugger, a filer, and other utility tasks. SAM2S is currently running on each node of a network of DEC LSI-11/23 and Heurikon/Motorola 68000 workstations connected by an Ethernet. This paper reviews features of Modula-2 for operating system development and outlines the design of SAM2S with special emphasis on its modularity and communication flexibility. The two SAM2S implementations differ mainly in their peripheral drivers and in the large amount of memory available on the 68000 systems. Modula-2 has proved highly suitable for writing large, portable, concurrent and distributed operating systems.",
"title": ""
},
{
"docid": "53cf86b02ac1cd2e80b8a947a8acf0d3",
"text": "We introduce a novel compositional language model that works on PredicateArgument Structures (PASs). Our model jointly learns word representations and their composition functions using bagof-words and dependency-based contexts. Unlike previous word-sequencebased models, our PAS-based model composes arguments into predicates by using the category information from the PAS. This enables our model to capture longrange dependencies between words and to better handle constructs such as verbobject and subject-verb-object relations. We verify this experimentally using two phrase similarity datasets and achieve results comparable to or higher than the previous best results. Our system achieves these results without the need for pretrained word vectors and using a much smaller training corpus; despite this, for the subject-verb-object dataset our model improves upon the state of the art by as much as ∼10% in relative performance.",
"title": ""
},
{
"docid": "453e4343653f2d84bc4b5077d9556de1",
"text": "Device-to-Device (D2D) communication is the technology enabling user equipments (UEs) to directly communicate with each other without help of evolved nodeB (eNB). Due to this characteristic, D2D communication can reduce end-to-end delay and traffic load offered to eNB. However, by applying D2D communication into cellular systems, interference between D2D and eNB relaying UEs can occur if D2D UEs reuse frequency band for eNB relaying UEs. In cellular systems, fractional frequency reuse (FFR) is used to reduce inter-cell interference of cell outer UEs. In this paper, we propose a radio resource allocation scheme for D2D communication underlaying cellular networks using FFR. In the proposed scheme, D2D and cellular UEs use the different frequency bands chosen as users' locations. The proposed radio resource allocation scheme can alleviate interference between D2D and cellular UEs if D2D device is located in cell inner region. If D2D UEs is located in cell outer region, D2D and cellular UEs experience tolerable interference. By simulations, we show that the proposed scheme improves the performance of D2D and cellular UEs by reducing interference between them.",
"title": ""
},
{
"docid": "17598d7543d81dcf7ceb4cb354fb7c81",
"text": "Bitcoin is the first decentralized crypto-currency that is currently by far the most popular one in use. The bitcoin transaction syntax is expressive enough to setup digital contracts whose fund transfer can be enforced automatically. In this paper, we design protocols for the bitcoin voting problem, in which there are n voters, each of which wishes to fund exactly one of two candidates A and B. The winning candidate is determined by majority voting, while the privacy of individual vote is preserved. Moreover, the decision is irrevocable in the sense that once the outcome is revealed, the winning candidate is guaranteed to have the funding from all n voters. As in previous works, each voter is incentivized to follow the protocol by being required to put a deposit in the system, which will be used as compensation if he deviates from the protocol. Our solution is similar to previous protocols used for lottery, but needs an additional phase to distribute secret random numbers via zero-knowledge-proofs. Moreover, we have resolved a security issue in previous protocols that could prevent compensation from being paid.",
"title": ""
},
{
"docid": "39755a818e818d2e10b0bac14db6c347",
"text": "Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the socalled Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize for a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm corresponding to soft-thresholding mapping, the SURE is a discontinuous function of the parameters preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences as proposed in [51]. Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of softthresholding, it is proved to be also a consistent estimator. This gradient estimate can then be used as a basis to perform a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth function. We provide its expression for a large class of iterative methods including proximal splitting ones and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.",
"title": ""
},
{
"docid": "6f6ae8ea9237cca449b8053ff5f368e7",
"text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.",
"title": ""
},
{
"docid": "56401a83fecb64f2810c4bbc51b912fc",
"text": "This paper presents an approach to vision-based simultaneous localization and mapping (SLAM). Our approach uses the scale invariant feature transform (SIFT) as features and applies a rejection technique to concentrate on a reduced set of distinguishable, stable features. We track detected SIFT features over consecutive frames obtained by a stereo camera and select only those features that appear to be stable from different views. Whenever a feature is selected, we compute a representative feature given the previous observations. This approach is applied within a Rao-Blackwellized particle filter to make the data association easier and furthermore to reduce the number of landmarks that need to be maintained in the map. Our system has been implemented and tested on data gathered with a mobile robot in a typical office environment. Experiments presented in this paper demonstrate that our method improves the data association and in this way leads to more accurate maps",
"title": ""
},
{
"docid": "2a2b2332e949372c6bba650725e9a9a2",
"text": "This study aimed to investigate the effect of academic procrastination on e-learning course achievement. Because all of the interactions among students, instructors, and contents in an e-learning environment were automatically recorded in a learning management system (LMS), procrastination such as the delays in weekly scheduled learning and late submission of assignments could be identified from log data. Among 569 college students who enrolled in an e-learning course in Korea, the absence and late submission of assignments were chosen to measure academic procrastination in e-learning. Multiple regression analysis was conducted to examine the relationship between academic procrastination and course achievement. The results showed that the absence and late submission of assignments were negatively significant in predicting course achievement. Furthermore, the study explored the predictability of academic procrastination on course achievement at four points of the 15-week course to test its potential for early prediction. The results showed that the regression model at each time point significantly predicted course achievement, and the predictability increased as time passed. Based on the findings, practical implications for facilitating a successful e-learning environment were suggested, and the potential of analyzing LMS data was discussed.",
"title": ""
},
{
"docid": "cb8fafd0cedfdca2b2d8d310891f6768",
"text": "1.1 BACKGROUND Calculators are not intelligent. Calculators give the right answers to challenging math problems, but everything they \" know \" is preprogrammed by people. They can never learn anything new, and outside of their limited domain of utility, they have the expertise of a stone. Calculators are able to solve problems entirely because people are already able to solve those same problems. Since the earliest days of computing, we have envisioned machines that could go beyond our own ability to solve problems—intelligent machines. We have generated many computing devices that can solve mathematical problems of enormous complexity, but mainly these too are merely \" calculators. \" They are prepro-grammed to do exactly what we want them to do. They accept input and generate the correct output. They may do it at blazingly fast speeds, but their underlying mechanisms depend on humans having already worked out how to write the programs that control their behavior. The dream of the intelligent machine is the vision of creating something that does not depend on having people preprogram its problem solving behavior. Put another way, artificial intelligence should not seek to merely solve problems, but should rather seek to solve the problem of how to solve problems. Although most scientific disciplines, such as mathematics, physics, chemistry, and biology, are well defined, the field of artificial intelligence (AI) remains enigmatic. This is nothing new. Even 20 years ago, Hofstadter (1985, p. 633) remarked, \" The central problem of AI is the question: What is the letter 'a'? Donald Knuth, on hearing me make this claim once, appended, 'And what is the letter 'i'?'—an amendment that I gladly accept. \" Despite nearly 50 years of research in the field, there is still no widely accepted definition of artificial intelligence. Even more, a discipline of computational intelligence—including research in neural networks, fuzzy systems, and evolutionary computation—has gained prominence as an alternative to AI, mainly because AI has failed to live up to its promises and because many believe that the methods that have been adopted under the old rubric of AI will never succeed. It may be astonishing to find that five decades of research in artificial intelligence have been pursued without fundamentally accepted goals, or even a simple",
"title": ""
},
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "6ecb0f91a888ceb679b94d6df7bd3775",
"text": "This paper* describes an innovative model of long term memory (SALT -Schema-Associative Long Term memory). It also presents an implementation of the SALT model, a specification of an agent, and some scenarios of interactions with the agent. The model presented has its roots in two of the most important general theories of human memory, namely the associative network theory and the schema-based theory. The main advantage of the SALT model is its capabilit y of generating context-dependent cognition. The examples selected for ill ustrating the functioning of the implementation were chosen from the field of personnel evaluation.",
"title": ""
},
{
"docid": "9aad59aeeb07a390062314fbb1c33d73",
"text": "An 8b 1.2 GS/s single-channel Successive Approximation Register (SAR) ADC is implemented in 32 nm CMOS, achieving 39.3 dB SNDR and a Figure-of-Merit (FoM) of 34 fJ per conversion step. High-speed operation is achieved by converting each sample with two alternate comparators clocked asynchronously and a redundant capacitive DAC with constant common mode to improve the accuracy of the comparator. A low-power, clocked capacitive reference buffer is used, and fractional reference voltages are provided to reduce the number of unit capacitors in the capacitive DAC (CDAC). The ADC stacks the CDAC with the reference capacitor to reduce the area and enhance the settling speed. Background calibration of comparator offset is implemented. The ADC consumes 3.1 mW from a 1 V supply and occupies 0.0015 mm2.",
"title": ""
},
{
"docid": "564ec6a2d5748afc83592ac0371a3ead",
"text": "Fine-grained vehicle classiflcation is a challenging task due to the subtle differences between vehicle classes. Several successful approaches to fine-grained image classification rely on part-based models, where the image is classified according to discriminative object parts. Such approaches require however that parts in the training images be manually annotated, a laborintensive process. We propose a convolutional architecture realizing a transform network capable of discovering the most discriminative parts of a vehicle at multiple scales. We experimentally show that our architecture outperforms a baseline reference if trained on class labels only, and performs closely to a reference based on a part-model if trained on loose vehicle localization bounding boxes.",
"title": ""
},
{
"docid": "eec33c75a0ec9b055a857054d05bcf54",
"text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.",
"title": ""
},
{
"docid": "5dc4dfc2d443c31332c70a56c2d70c7d",
"text": "Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweets data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot the societal interest and general people's opinions in regard to a social event. In this paper, Australian federal election 2010 event was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment of the specific political candidates, i.e., two primary minister candidates - Julia Gillard and Tony Abbot. Our experimental results demonstrate the effectiveness of the system.",
"title": ""
}
] |
scidocsrr
|
aad80bd51e1e196a2c7c41b21cc4b96d
|
A Qualitative Review on 3D Coarse Registration Methods
|
[
{
"docid": "9e555c44e0b27af976ec54ebab126df3",
"text": "Few systems capable of recognizing complex objects with free-form (sculptured) surfaces have been developed. The apparent lack of success is mainly due to the lack of a competent modelling scheme for representing such complex objects. In this paper, a new form of point representation for describing 3D free-form surfaces is proposed. This representation, which we call the point signature, serves to describe the structural neighbourhood of a point in a more complete manner than just using the 3D coordinates of the point. Being invariant to rotation and translation, the point signature can be used directly to hypothesize the correspondence to model points with similar signatures. Recognition is achieved by matching the signatures of data points representing the sensed surface to the signatures of data points representing the model surface. The use of point signatures is not restricted to the recognition of a single-object scene to a small library of models. Instead, it can be extended naturally to the recognition of scenes containing multiple partially-overlapping objects (which may also be juxtaposed with each other) against a large model library. No preliminary phase of segmenting the scene into the component objects is required. In searching for the appropriate candidate model, recognition need not proceed in a linear order which can become prohibitive for a large model library. For a given scene, signatures are extracted at arbitrarily spaced seed points. Each of these signatures is used to vote for models that contain points having similar signatures. Inappropriate models with low votes can be rejected while the remaining candidate models are ordered according to the votes they received. In this way, efficient verification of the hypothesized candidates can proceed by testing the most likely model first. Experiments using real data obtained from a range finder have shown fast recognition from a library of fifteen models whose complexities vary from that of simple piecewise quadric shapes to complicated face masks. Results from the recognition of both single-object and multiple-object scenes are presented.",
"title": ""
}
] |
[
{
"docid": "7bb04f2163e253068ac665f12a5dd35c",
"text": "Automatic segmentation of the liver and hepatic lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT and MRI abdomen images using cascaded fully convolutional neural networks (CFCNs) enabling the segmentation of large-scale medical trials and quantitative image analyses. We train and cascade two FCNs for the combined segmentation of the liver and its lesions. As a first step, we train an FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions within the predicted liver ROIs of step 1. CFCN models were trained on an abdominal CT dataset comprising 100 hepatic tumor volumes. Validation results on further datasets show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for the liver with computation times below 100s per volume. We further experimentally demonstrate the robustness of the proposed method on 38 MRI liver tumor volumes and the public 3DIRCAD dataset.",
"title": ""
},
{
"docid": "015976c8877fa6561c6dbe4dcf58ee7c",
"text": "Classic sparse representation for classification (SRC) method fails to incorporate the label information of training images, and meanwhile has a poor scalability due to the expensive computation for `1 norm. In this paper, we propose a novel subspace sparse coding method with utilizing label information to effectively classify the images in the subspace. Our new approach unifies the tasks of dimension reduction and supervised sparse vector learning, by simultaneously preserving the data sparse structure and meanwhile seeking the optimal projection direction in the training stage, therefore accelerates the classification process in the test stage. Our method achieves both flat and structured sparsity for the vector representations, therefore making our framework more discriminative during the subspace learning and subsequent classification. The empirical results on 4 benchmark data sets demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "22f61d8bab9ba3b89b9ce23d5ee2ef04",
"text": "Images of female scientists and engineers in popular$lms convey cultural and social assumptions about the role of women in science, engineering, and technology (SET). This study analyzed cultural representations of gender conveyed through images offemale scientists andengineers in popularjilms from 1991 to 2001. While many of these depictions of female scientists and engineers emphasized their appearance and focused on romance, most depictions also presented female scientists and engineers in professional positions of high status. Other images that showed the fernale scientists and engineers' interactions with male colleagues, ho~vevel; reinforced traditional social and cultural assumptions about the role of women in SET through overt and subtle forms of stereotyping. This article explores the sign$cance of thesejindings fordevelopingprograms to change girls'perceptions of scientists and engineers and attitudes toward SET careers.",
"title": ""
},
{
"docid": "4af5f2e9b12b4efa43c053fd13f640d0",
"text": "The high level of heterogeneity between linguistic annotations usually complic ates the interoperability of processing modules within an NLP pipeline. In this paper, a framework for the interoperation of NLP co mp nents, based on a data-driven architecture, is presented. Here, ontologies of linguistic annotation are employed to provide a conceptu al basis for the tag-set neutral processing of linguistic annotations. The framework proposed here is based on a set of struc tured OWL ontologies: a reference ontology, a set of annotation models which formalize different annotation schemes, and a declarativ e linking between these, specified separately. This modular architecture is particularly scalable and flexible as it allows for the integration of different reference ontologies of linguistic annotations in order to overcome the absence of a consensus for an ontology of ling uistic terminology. Our proposal originates from three lines of research from different fields: research on annotation type systems in UIMA; the ontological architecture OLiA, originally developed for sustainable documentation and annotation-independent corpus browsin g, and the ontologies of the OntoTag model, targeted towards the processing of linguistic annotations in Semantic Web applications. We describ how UIMA annotations can be backed up by ontological specifications of annotation schemes as in the OLiA model, and how these ar e linked to the OntoTag ontologies, which allow for further ontological processing.",
"title": ""
},
{
"docid": "f4aa06f7782a22eeb5f30d0ad27eaff9",
"text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.",
"title": ""
},
{
"docid": "1eb2dcb1c5c1fb88e3f6a3b80fbf31d5",
"text": "For years, researchers and practitioners have primarily investigated the various processes within manufacturing supply chains individually. Recently, however, there has been increasing attention placed on the performance, design, and analysis of the supply chain as a whole. This attention is largely a result of the rising costs of manufacturing, the shrinking resources of manufacturing bases, shortened product life cycles, the leveling of the playing field within manufacturing, and the globalization of market economies. The objectives of this paper are to: (1) provide a focused review of literature in multi-stage supply chain modeling and (2) define a research agenda for future research in this area.",
"title": ""
},
{
"docid": "57a2ef4a644f0fc385185a381f309fcd",
"text": "Despite recent emergence of adversarial based methods for video prediction, existing algorithms often produce unsatisfied results in image regions with rich structural information (i.e., object boundary) and detailed motion (i.e., articulated body movement). To this end, we present a structure preserving video prediction framework to explicitly address above issues and enhance video prediction quality. On one hand, our framework contains a two-stream generation architecture which deals with high frequency video content (i.e., detailed object or articulated motion structure) and low frequency video content (i.e., location or moving directions) in two separate streams. On the other hand, we propose a RNN structure for video prediction, which employs temporal-adaptive convolutional kernels to capture time-varying motion patterns as well as tiny objects within a scene. Extensive experiments on diverse scenes, ranging from human motion to semantic layout prediction, demonstrate the effectiveness of the proposed video prediction approach.",
"title": ""
},
{
"docid": "3fd6d0ef0240b2fdd2a9c76a023ecab6",
"text": "In this work, an exponential spline method is developed and a nalyzed for approximating solutions of calculus of variati ons problems. The method uses a spline interpolant, which is con structed from exponential spline. It is proved to be secondrder convergent. Finally some illustrative examples are includ ed to demonstrate the applicability of the new technique. Nu merical results confirm the order of convergence predicted by the analysis.",
"title": ""
},
{
"docid": "22bfe6518994bac7d009ca98990f42b0",
"text": "BACKGROUND\nThe free nipple breast reduction method has certain disadvantages, such as nipple hyposensitivity, loss of lactation, and loss of projection. To eliminate these risks, the authors describe a patient-based breast reduction technique in which the major supplier vessels of the nipple-areola complex were determined by color Doppler ultrasonography. Pedicles containing these vessels were designed for reductions.\n\n\nMETHODS\nSixteen severe gigantomastia patients with a mean age of 41 years (range, 23 to 60 years) were included in the study. Major nipple-areola complex perforators were determined with 13- to 5-MHz linear probe Doppler ultrasonography before surgery. Pedicles were designed according to the vessel locations, and reductions were performed with superomedial-, superolateral-, or mediolateral-based designs.\n\n\nRESULTS\nDifferent combinations of internal mammary and lateral thoracic artery perforator-based reductions were achieved. None of the patients had areola necrosis. Mean reduction weight was 1795 g (range, 1320 to 2280) per breast.\n\n\nCONCLUSIONS\nInstead of using standard markings for severe gigantomastia patients, custom-made and sonographically determined pedicles were used. This technique can be considered as a \"guide\" for the surgeon during very large breast reductions.",
"title": ""
},
{
"docid": "4f2986b922e09df53aa7662bf58b1429",
"text": "Two semi-supervised feature extraction methods are proposed for electroencephalogram (EEG) classification. They aim to alleviate two important limitations in brain–computer interfaces (BCIs). One is on the requirement of small training sets owing to the need of short calibration sessions. The second is the time-varying property of signals, e.g., EEG signals recorded in the training and test sessions often exhibit different discriminant features. These limitations are common in current practical applications of BCI systems and often degrade the performance of traditional feature extraction algorithms. In this paper, we propose two strategies to obtain semi-supervised feature extractors by improving a previous feature extraction method extreme energy ratio (EER). The two methods are termed semi-supervised temporally smooth EER and semi-supervised importance weighted EER, respectively. The former constructs a regularization term on the preservation of the temporal manifold of test samples and adds this as a constraint to the learning of spatial filters. The latter defines two kinds of weights by exploiting the distribution information of test samples and assigns the weights to training data points and trials to improve the estimation of covariance matrices. Both of these two methods regularize the spatial filters to make them more robust and adaptive to the test sessions. Experimental results on data sets from nine subjects with comparisons to the previous EER demonstrate their better capability for classification.",
"title": ""
},
{
"docid": "3cabea669b02ca2653b880c0e0603005",
"text": "A simple method is presented to remedy the hysteresis problem associated with the gate dielectric of poly(4-vinyl phenol) (PVPh), which is widely used for organic transistors. The method involves simple blanket illumination of deep ultraviolet (UV) on the PVPh layer at room temperature. The exposure results in the photochemical transformation of hydroxyl groups in PVPh via the UV/ozone effect. This reduction in the concentration of hydroxyl groups enables one to effectively control the hysteresis problem even when the layer is exposed to moisture. The contrast created in the concentration of hydroxyl groups between the exposed and unexposed parts of PVPh also allows simultaneous patterning of the dielectric layer.",
"title": ""
},
{
"docid": "88478e315049f2c155bb611d797e8eb1",
"text": "In this paper we analyze aspects of the intellectual property strategies of firms in the global cosmetics and toilet preparations industry. Using detailed data on all 4,205 EPO patent grants in the relevant IPC class between 1980 and 2001, we find that about 15 percent of all patents are challenged in EPO opposition proceedings, a rate about twice as high as in the overall population of EPO patents. Moreover, opposition in this sector is more frequent than in chemicals-based high technology industries such as biotechnology and pharmaceuticals. About one third of the opposition cases involve multiple opponents. We search for rationales that could explain this surprisingly strong “IP litigation” activity. In a first step, we use simple probability models to analyze the likelihood of opposition as a function of characteristics of the attacked patent. We then introduce owner firm variables and find that major differences across firms in the likelihood of having their patents opposed prevail even after accounting for other influences. Aggressive opposition in the past appears to be associated with a reduction of attacks on own patents. In future work we will look at the determinants of outcomes and duration of these oppositions, in an attempt to understand the firms’ strategies more fully. Acknowledgements This version of the paper was prepared for presentation at the Productivity Program meetingsof the NBER Summer Institute. An earlier version of the paper was presented in February 2002 at the University of Maastricht Workshop on Strategic Management, Innovation and Econometrics, held at Chateau St. Gerlach, Valkenburg. We would like to thank the participants and in particular Franz Palm and John Hagedoorn for their helpful comments.",
"title": ""
},
{
"docid": "54546694b5b43b561237d50ce4a67dfc",
"text": "We describe a load balancing system for parallel intrusion detection on multi-core systems using a novel model allowing fine-grained selection of the network traffic to be analyzed. The system receives data from a network and distributes it to multiple IDSs running on individual CPU cores. In contrast to related approaches, we do not assume a static association of flows to IDS processes but adaptively determine the load of each IDS process to allocate network flows for a limited time window. We developed a priority model for the selection of network data and the assignment process. Special emphasis is given to environments with highly dynamic network traffic, where only a fraction of all data can be analyzed due to system constraints. We show that IDSs analyzing packet payload data disproportionately suffer from random packet drops due to overload. Our proposed system ensures loss-free analysis for selected data streams in a specified time interval. Our primary focus lies on the treatment of dynamic network behavior: neither data should be lost unintentionally, nor analysis processes should be needlessly idle. To evaluate the priority model and assignment systems, we implemented a prototype and evaluated it with real network traffic.",
"title": ""
},
{
"docid": "09338824cf3c6870be2369f47f6ddd17",
"text": "Myocardial infarction (MI) in rats is accompanied by apoptosis in the limbic system and a behavioural syndrome similar to models of depression. We have already shown that probiotics can reduce post-MI apoptosis and designed the present study to determine if probiotics can also prevent post-MI depressive behaviour. We also tested the hypothesis that probiotics achieve their central effects through changes in the intestinal barrier. MI was induced in anaesthetised rats via 40-min transient occlusion of the left anterior coronary artery. Sham rats underwent the same surgical procedure without actual coronary occlusion. For 7 d before MI and between the seventh post-MI day and euthanasia, half the MI and sham rats were given one billion live bacterial cells of Lactobacillus helveticus R0052 and Bifidobacterium longum R0175 per d dissolved in water, while the remaining animals received only the vehicle (maltodextrin). Depressive behaviour was evaluated 2 weeks post-MI in social interaction, forced swimming and passive avoidance step-down tests. Intestinal permeability was evaluated by oral administration with fluorescein isothiocyanate-dextran, 4 h before euthanasia. MI rats displayed less social interaction and impaired performance in the forced swimming and passive avoidance step-down tests compared to the sham controls (P < 0·05). Probiotics reversed the behavioural effects of MI (P < 0·05), but did not alter the behaviour of sham rats. Intestinal permeability was increased in MI rats and reversed by probiotics. In conclusion, L. helveticus R0052 and B. longum R0175 combination interferes with the development of post-MI depressive behaviour and restores intestinal barrier integrity in MI rats.",
"title": ""
},
{
"docid": "9d1772aaf73fce855fb4c79bfdde938a",
"text": "OBJECTIVE\nTo provide guidance for the organisation and delivery of clinical services and the clinical management of patients who deliberately self-harm, based on scientific evidence supplemented by expert clinical consensus and expressed as recommendations.\n\n\nMETHOD\nArticles and information were sourced from search engines including PubMed, EMBASE, MEDLINE and PsycINFO for several systematic reviews, which were supplemented by literature known to the deliberate self-harm working group, and from published systematic reviews and guidelines for deliberate self-harm. Information was reviewed by members of the deliberate self-harm working group, and findings were then formulated into consensus-based recommendations and clinical guidance. The guidelines were subjected to successive consultation and external review involving expert and clinical advisors, the public, key stakeholders, professional bodies and specialist groups with interest and expertise in deliberate self-harm.\n\n\nRESULTS\nThe Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for deliberate self-harm provide up-to-date guidance and advice regarding the management of deliberate self-harm patients, which is informed by evidence and clinical experience. The clinical practice guidelines for deliberate self-harm is intended for clinical use and service development by psychiatrists, psychologists, physicians and others with an interest in mental health care.\n\n\nCONCLUSION\nThe clinical practice guidelines for deliberate self-harm address self-harm within specific population sub-groups and provide up-to-date recommendations and guidance within an evidence-based framework, supplemented by expert clinical consensus.",
"title": ""
},
{
"docid": "22c85072db1f5b5a51b69fcabf01eb5e",
"text": "Websites’ and mobile apps’ privacy policies, written in natural language, tend to be long and difficult to understand. Information privacy revolves around the fundamental principle of notice and choice, namely the idea that users should be able to make informed decisions about what information about them can be collected and how it can be used. Internet users want control over their privacy, but their choices are often hidden in long and convoluted privacy policy documents. Moreover, little (if any) prior work has been done to detect the provision of choices in text. We address this challenge of enabling user choice by automatically identifying and extracting pertinent choice language in privacy policies. In particular, we present a two-stage architecture of classification models to identify opt-out choices in privacy policy text, labelling common varieties of choices with a mean F1 score of 0.735. Our techniques enable the creation of systems to help Internet users to learn about their choices, thereby effectuating notice and choice and improving Internet privacy.",
"title": ""
},
{
"docid": "85e42e9dd33ed5ece93aa73a6fe1b6e3",
"text": "In the present work CO2 continuous laser welding process was successfully applied and optimized for joining a dissimilar AISI 316 stainless steel and AISI 1009 low carbon steel plates. Laser power, welding speed, and defocusing distance combinations were carefully selected with the objective of producing welded joint with complete penetration, minimum fusion zone size and acceptable welding profile. Fusion zone area and shape of dissimilar austenitic stainless steel with ferritic low carbon steel were evaluated as a function of the selected laser welding parameters. Taguchi approach was used as statistical design of experiment (DOE) technique for optimizing the selected welding parameters in terms of minimizing the fusion zone. Mathematical models were developed to describe the influence of the selected parameters on the fusion zone area and shape, to predict its value within the limits of the variables being studied. The result indicates that the developed models can predict the responses satisfactorily.",
"title": ""
},
{
"docid": "eef278400e3526a90e144662aab9af12",
"text": "BACKGROUND\nMango is a highly perishable seasonal fruit and large quantities are wasted during the peak season as a result of poor postharvest handling procedures. Processing surplus mango fruits into flour to be used as a functional ingredient appears to be a good preservation method to ensure its extended consumption.\n\n\nRESULTS\nIn the present study, the chemical composition, bioactive/antioxidant compounds and functional properties of green and ripe mango (Mangifera indica var. Chokanan) peel and pulp flours were evaluated. Compared to commercial wheat flour, mango flours were significantly low in moisture and protein, but were high in crude fiber, fat and ash content. Mango flour showed a balance between soluble and insoluble dietary fiber proportions, with total dietary fiber content ranging from 3.2 to 5.94 g kg⁻¹. Mango flours exhibited high values for bioactive/antioxidant compounds compared to wheat flour. The water absorption capacity and oil absorption capacity of mango flours ranged from 0.36 to 0.87 g kg⁻¹ and from 0.18 to 0.22 g kg⁻¹, respectively.\n\n\nCONCLUSION\nResults of this study showed mango peel flour to be a rich source of dietary fiber with good antioxidant and functional properties, which could be a useful ingredient for new functional food formulations.",
"title": ""
},
{
"docid": "4dffb7bcd82bcc2fbb7291233e4f8f88",
"text": "In the following paper, we present a framework for quickly training 2D object detectors for robotic perception. Our method can be used by robotics practitioners to quickly (under 30 seconds per object) build a large-scale real-time perception system. In particular, we show how to create new detectors on the fly using large-scale internet image databases, thus allowing a user to choose among thousands of available categories to build a detection system suitable for the particular robotic application. Furthermore, we show how to adapt these models to the current environment with just a few in-situ images. Experiments on existing 2D benchmarks evaluate the speed, accuracy, and flexibility of our system.",
"title": ""
},
{
"docid": "ab01dc16d6f31a423b68fca2aeb8e109",
"text": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.",
"title": ""
}
] |
scidocsrr
|
cb3d3f595f86e24489cecacc94a046fe
|
Greater Cortical Thickness in Elderly Female Yoga Practitioners—A Cross-Sectional Study
|
[
{
"docid": "7b347abe744b19215cf7a50ebd1b7f89",
"text": "The thickness of the cerebral cortex was measured in 106 non-demented participants ranging in age from 18 to 93 years. For each participant, multiple acquisitions of structural T1-weighted magnetic resonance imaging (MRI) scans were averaged to yield high-resolution, high-contrast data sets. Cortical thickness was estimated as the distance between the gray/white boundary and the outer cortical surface, resulting in a continuous estimate across the cortical mantle. Global thinning was apparent by middle age. Men and women showed a similar degree of global thinning, and did not differ in mean thickness in the younger or older groups. Age-associated differences were widespread but demonstrated a patchwork of regional atrophy and sparing. Examination of subsets of the data from independent samples produced highly similar age-associated patterns of atrophy, suggesting that the specific anatomic patterns within the maps were reliable. Certain results, including prominent atrophy of prefrontal cortex and relative sparing of temporal and parahippocampal cortex, converged with previous findings. Other results were unexpected, such as the finding of prominent atrophy in frontal cortex near primary motor cortex and calcarine cortex near primary visual cortex. These findings demonstrate that cortical thinning occurs by middle age and spans widespread cortical regions that include primary as well as association cortex.",
"title": ""
},
{
"docid": "5dde27787ee92c2e56729b25b9ca4311",
"text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.",
"title": ""
}
] |
[
{
"docid": "62c71a412a8b715e2fda64cd8b6a2a66",
"text": "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster—a subset of vertices whose internal connections are significantly richer than its external connections—near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and webgraphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the secondsmallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.",
"title": ""
},
{
"docid": "3d4cfb2d3ba1e70e5dd03060f5d5f663",
"text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.",
"title": ""
},
{
"docid": "6acb744fdeb496ef6a154c76b794e515",
"text": "UNLABELLED\nOvococci form a morphological group that includes several human pathogens (enterococci and streptococci). Their shape results from two modes of cell wall insertion, one allowing division and one allowing elongation. Both cell wall synthesis modes rely on a single cytoskeletal protein, FtsZ. Despite the central role of FtsZ in ovococci, a detailed view of the in vivo nanostructure of ovococcal Z-rings has been lacking thus far, limiting our understanding of their assembly and architecture. We have developed the use of photoactivated localization microscopy (PALM) in the ovococcus human pathogen Streptococcus pneumoniae by engineering spDendra2, a photoconvertible fluorescent protein optimized for this bacterium. Labeling of endogenously expressed FtsZ with spDendra2 revealed the remodeling of the Z-ring's morphology during the division cycle at the nanoscale level. We show that changes in the ring's axial thickness and in the clustering propensity of FtsZ correlate with the advancement of the cell cycle. In addition, we observe double-ring substructures suggestive of short-lived intermediates that may form upon initiation of septal cell wall synthesis. These data are integrated into a model describing the architecture and the remodeling of the Z-ring during the cell cycle of ovococci.\n\n\nIMPORTANCE\nThe Gram-positive human pathogen S. pneumoniae is responsible for 1.6 million deaths per year worldwide and is increasingly resistant to various antibiotics. FtsZ is a cytoskeletal protein polymerizing at midcell into a ring-like structure called the Z-ring. FtsZ is a promising new antimicrobial target, as its inhibition leads to cell death. A precise view of the Z-ring architecture in vivo is essential to understand the mode of action of inhibitory drugs (see T. den Blaauwen, J. M. Andreu, and O. Monasterio, Bioorg Chem 55:27-38, 2014, doi:10.1016/j.bioorg.2014.03.007, for a review on FtsZ inhibitors). This is notably true in ovococcoid bacteria like S. pneumoniae, in which FtsZ is the only known cytoskeletal protein. We have used superresolution microscopy to obtain molecular details of the pneumococcus Z-ring that have so far been inaccessible with conventional microscopy. This study provides a nanoscale description of the Z-ring architecture and remodeling during the division of ovococci.",
"title": ""
},
{
"docid": "16b2fd63778450887118e62096e6af26",
"text": "Due to the rapid proliferation of image capturing devices and user-friendly editing software suites, image manipulation is at everyone's hand. For this reason, the forensic community has developed a series of techniques to determine image authenticity. In this paper, we propose an algorithm for image tampering detection and localization, leveraging characteristic footprints left on images by different camera models. The rationale behind our algorithm is that all pixels of pristine images should be detected as being shot with a single device. Conversely, if a picture is obtained through image composition, traces of multiple devices can be detected. The proposed algorithm exploits a convolutional neural network (CNN) to extract characteristic camera model features from image patches. These features are then analyzed by means of iterative clustering techniques in order to detect whether an image has been forged, and localize the alien region.",
"title": ""
},
{
"docid": "1d6f3a0921f373045c39e8232f817c79",
"text": "This paper presents the design of a low voltage differential signaling (LVDS) receiver for a 1.3 Gb/s physical layer (PRY) interface. The receiver supports a wide input common mode range of 0.05 V to 2.35 V and a minimum input differential signal of 100 mV as specified by the IEEE LVDS standard. The design is implemented in 0.13 /spl mu/m CMOS technology using both thick (3.3 V) and thin (1.2 V) gate oxide devices and the receiver consumes 11 mW of power. The receiver provides the interface between PHY and media access control (MAC) sub-layers.",
"title": ""
},
{
"docid": "8e10d20723be23d699c0c581c529ee19",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "49c87552a43f75200fb869aa13de0cf5",
"text": "We combine data from a field survey with transaction log data from a mobile phone operator to provide new insight into daily patterns of mobile phone use in Rwanda. The analysis is divided into three parts. First, we present a statistical comparison of the general Rwandan population to the population of mobile phone owners in Rwanda. We find that phone owners are considerably wealthier, better educated, and more predominantly male than the general population. Second, we analyze patterns of phone use and access, based on self-reported survey data. We note statistically significant differences by gender; for instance, women are more likely to use shared phones than men. Third, we perform a quantitative analysis of calling patterns and social network structure using mobile operator billing logs. By these measures, the differences between men and women are more modest, but we observe vast differences in utilization between the relatively rich and the relatively poor. Taken together, the evidence in this paper suggests that phones are disproportionately owned and used by the privileged strata of Rwandan society.",
"title": ""
},
{
"docid": "ccf819d0d247bc795a5dfa72df91c1c5",
"text": "O Query # Rules/CQs Rewriting time, ms (avg. eval. time, DLV) RequiemG Presto Clipper RequiemG Presto Clipper Q1 27 53 42 281 45 50 A Q2 50 32 31 184 46 62 Q3 104 32 31 292 27 65 Q4 224 43 36 523 32 71 Q1 6 7 10 14 7 19 S Q2 2 3 22 263 9 22 Q3 4 4 9 1717 10 21 Q4 4 4 24 1611 9 23 Q1 2 4 2 14 ( 1247 ) 12 ( 1252 ) 27 ( 1255 ) U Q2 1 2 45 201 ( 1247 ) 23 ( 1262 ) 36 ( 1637 ) Q3 4 8 17 477 ( 2055 ) 26 ( 2172 ) 29 ( 1890 ) Q4 2 56 63 2431 ( 1260 ) 20 ( 1235 ) 28 ( 1735 ) Table: Comparison with other query rewriting engines oveDL-Lite ontologies (Adolena, Stock exchange, University)",
"title": ""
},
{
"docid": "aa8e351d9e4d4065e5ce59718b7f085e",
"text": "A hybrid metal-dielectric nanoantenna promises to harness the large Purcell factor of metallic nanostructures while taking advantage of the high scattering directivity and low dissipative losses of dielectric nanostructures. Here, we investigate a compact hybrid metal-dielectric nanoantenna that is inspired by the Yagi-Uda design. It comprises a metallic gold bowtie nanoantenna feed element and three silicon nanorod directors, exhibiting high unidirectional in-plane directivity and potential beam redirection capability in the visible spectral range. The entire device has a footprint of only 0.38 λ2, and its forward directivity is robust against fabrication imperfections. We use the photoluminescence from the gold bowtie nanoantenna itself as an elegant emitter to characterize the directivity of the device and experimentally demonstrate a directivity of ∼49.2. In addition, we demonstrate beam redirection with our device, achieving a 5° rotation of the main emission lobe with a feed element displacement of only 16 nm. These results are promising for various applications, including on-chip wireless communications, quantum computing, display technologies, and nanoscale alignment.",
"title": ""
},
{
"docid": "363cc184a6cae8b7a81744676e339a80",
"text": "Dismissing-avoidant adults are characterized by expressing relatively low levels of attachment-related distress. However, it is unclear whether this reflects a relative absence of covert distress or an attempt to conceal covert distress. Two experiments were conducted to distinguish between these competing explanations. In Experiment 1, participants were instructed to suppression resulted in a decrease in the accessibility of abandonment-related thoughts for dismissing-avoidant adults. Experiment 2 demonstrated that attempts to suppress the attachment system resulted in decreases in physiological arousal for dismissing-avoidant adults. These experiments indicate that dismissing-avoidant adults are capable of suppressing the latent activation of their attachment system and are not simply concealing latent distress. The discussion focuses on development, cognitive, and social factors that may promote detachment.",
"title": ""
},
{
"docid": "14a8b362e7ba287d21d5ce3c4f87c733",
"text": "A novel model-based approach to 3D hand tracking from monocular video is presented. The 3D hand pose, the hand texture, and the illuminant are dynamically estimated through minimization of an objective function. Derived from an inverse problem formulation, the objective function enables explicit use of temporal texture continuity and shading information while handling important self-occlusions and time-varying illumination. The minimization is done efficiently using a quasi-Newton method, for which we provide a rigorous derivation of the objective function gradient. Particular attention is given to terms related to the change of visibility near self-occlusion boundaries that are neglected in existing formulations. To this end, we introduce new occlusion forces and show that using all gradient terms greatly improves the performance of the method. Qualitative and quantitative experimental results demonstrate the potential of the approach.",
"title": ""
},
{
"docid": "2d356c3d189bbd3bf9ba9db9b5878780",
"text": "Training deep networks for semantic segmentation requires annotation of large amounts of data, which can be time-consuming and expensive. Unfortunately, these trained networks still generalize poorly when tested in domains not consistent with the training data. In this paper, we show that by carefully presenting a mixture of labeled source domain and proxy-labeled target domain data to a network, we can achieve state-of-the-art unsupervised domain adaptation results. With our design, the network progressively learns features specific to the target domain using annotation from only the source domain. We generate proxy labels for the target domain using the network’s own predictions. Our architecture then allows selective mining of easy samples from this set of proxy labels, and hard samples from the annotated source domain. We conduct a series of experiments with the GTA5, Cityscapes and BDD100k datasets on synthetic-to-real domain adaptation and geographic domain adaptation, showing the advantages of our method over baselines and existing approaches.",
"title": ""
},
{
"docid": "63de2448edead6e16ef2bc86c3acd77b",
"text": "In traditional topic models such as LDA, a word is generated by choosing a topic from a collection. However, existing topic models do not identify different types of topics in a document, such as topics that represent the content and topics that represent the sentiment. In this paper, our goal is to discover such different types of topics, if they exist. We represent our model as several parallel topic models (called topic factors), where each word is generated from topics from these factors jointly. Since the latent membership of the word is now a vector, the learning algorithms become challenging. We show that using a variational approximation still allows us to keep the algorithm tractable. Our experiments over several datasets show that our approach consistently outperforms many classic topic models while also discovering fewer, more meaningful, topics. 1",
"title": ""
},
{
"docid": "845ee0b77e30a01d87e836c6a84b7d66",
"text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.",
"title": ""
},
{
"docid": "60e56a59ecbdee87005407ed6a117240",
"text": "The visionary Steve Jobs said, “A lot of times, people don’t know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long tail items, whose existence users may be even unaware of. This task is formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobisbased transform to discriminate a given pair to be sharing a same style or not. Despite its success, we experimentally found that it’s only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is it totally ignores the existence of taxonomy information that is ubiquitous in many datasets like Amazon the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as following.",
"title": ""
},
{
"docid": "8509e54bfc57829fcb5542116acc274f",
"text": "Big Data has become the new ubiquitous term used to describe massive collection of datasets that are difficult to process using traditional database and software techniques. Most of this data is inaccessible to users, as we need technology and tools to find, transform, analyze, and visualize data in order to make it consumable for decision-making. One aspect of Big Data research is dealing with the Variety of data that includes various formats such as structured, numeric, unstructured text data, email, video, audio, stock ticker, etc. Managing, merging, and governing a variety of data is the focus of this paper. This paper proposes a semantic Extract-Transform-Load (ETL) framework that uses semantic technologies to integrate and publish data from multiple sources as open linked data. This includes - creation of a semantic data model to provide a basis for integration and understanding of knowledge from multiple sources, creation of a distributed Web of data using Resource Description Framework (RDF) as the graph data model, extraction of useful knowledge and information from the combined data using SPARQL as the semantic query language.",
"title": ""
},
{
"docid": "c32b7f497450d92634ea097bbb062178",
"text": "This work addresses fine-grained image classification. Our work is based on the hypothesis that when dealing with subtle differences among object classes it is critical to identify and only account for a few informative image parts, as the remaining image context may not only be uninformative but may also hurt recognition. This motivates us to formulate our problem as a sequential search for informative parts over a deep feature map produced by a deep Convolutional Neural Network (CNN). A state of this search is a set of proposal bounding boxes in the image, whose informativeness is evaluated by the heuristic function (H), and used for generating new candidate states by the successor function (S). The two functions are unified via a Long Short-Term Memory network (LSTM) into a new deep recurrent architecture, called HSnet. Thus, HSnet (i) generates proposals of informative image parts and (ii) fuses all proposals toward final fine-grained recognition. We specify both supervised and weakly supervised training of HSnet depending on the availability of object part annotations. Evaluation on the benchmark Caltech-UCSD Birds 200-2011 and Cars-196 datasets demonstrate our competitive performance relative to the state of the art.",
"title": ""
},
{
"docid": "ff0027291a4b6765a64c3132b8e63cfa",
"text": "Predicting trends in stock market prices has been an area of interest for researchers for many years due to its complex and dynamic nature. Intrinsic volatility in stock market across the globe makes the task of prediction challenging. Forecasting and diffusion modeling, although effective can’t be the panacea to the diverse range of problems encountered in prediction, short-term or otherwise. Market risk, strongly correlated with forecasting errors, needs to be minimized to ensure minimal risk in investment. The authors propose to minimize forecasting error by treating the forecasting problem as a classification problem, a popular suite of algorithms in Machine learning. In this paper, we propose a novel way to minimize the risk of investment in stock market by predicting the returns of a stock using a class of powerful machine learning algorithms known as ensemble learning. Some of the technical indicators such as Relative Strength Index (RSI), stochastic oscillator etc are used as inputs to train our model. The learning model used is an ensemble of multiple decision trees. The algorithm is shown to outperform existing algorithms found in the literature. Out of Bag (OOB) error estimates have been found to be encouraging.",
"title": ""
},
{
"docid": "1857eb0d2d592961bd7c1c2f226df616",
"text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.",
"title": ""
},
{
"docid": "a3585d424a54c31514aba579b80d8231",
"text": "The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired from techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show on numerical simulations that the method is promising and allows to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.",
"title": ""
}
] |
scidocsrr
|
54015d1232132c3bc906555d6efb734c
|
Spatial Resolution Enhancement of Low-Resolution Image Sequences A Comprehensive Review with Directions for Future Research
|
[
{
"docid": "7dbb7d378eae5c4b77076aa9504ba871",
"text": "The authors present a Markov random field model which allows realistic edge modeling while providing stable maximum a posterior (MAP) solutions. The model, referred to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for map estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.",
"title": ""
}
] |
[
{
"docid": "6055957e5f48c5f82afcfa641176b759",
"text": "This article presents the design of a low cost fully active phased array antenna with specific emphasis on the realization of an elementary radiating cell. The phased array antenna is designed for mobile satellite services and dedicated for automotive applications. Details on the radiating element design as well as its implementation in a multilayer's build-up are presented and discussed. Results of the measurements and characterization of the elementary radiating cell are also presented and discussed. An outlook of the next steps in the antenna realization concludes this paper.",
"title": ""
},
{
"docid": "397b3b96c16b2ce310ab61f9d2d7bdbf",
"text": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.",
"title": ""
},
{
"docid": "b515eb759984047f46f9a0c27b106f47",
"text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.",
"title": ""
},
{
"docid": "ffdddba343bb0aa47fc101696ab3696d",
"text": "The meaning of a sentence in a document is more easily determined if its constituent words exhibit cohesion with respect to their individual semantics. This paper explores the degree of cohesion among a document's words using lexical chains as a semantic representation of its meaning. Using a combination of diverse types of lexical chains, we develop a text document representation that can be used for semantic document retrieval. For our approach, we develop two kinds of lexical chains: (i) a multilevel flexible chain representation of the extracted semantic values, which is used to construct a fixed segmentation of these chains and constituent words in the text; and (ii) a fixed lexical chain obtained directly from the initial semantic representation from a document. The extraction and processing of concepts is performed using WordNet as a lexical database. The segmentation then uses these lexical chains to model the dispersion of concepts in the document. Representing each document as a high-dimensional vector, we use spherical k-means clustering to demonstrate that our approach performs better than previ-",
"title": ""
},
{
"docid": "653f7e6f8aac3464eeac88a5c2f21f2e",
"text": "The decentralized electronic currency system Bitcoin gives the possibility to execute transactions via direct communication between users, without the need to resort to third parties entrusted with legitimizing the concerned monetary value. In its current state of development a recent, fast-changing, volatile and highly mediatized technology the discourses that unfold within spaces of information and discussion related to Bitcoin can be analysed in light of their ability to produce at once the representations of value, the practices according to which it is transformed and evolves, and the devices allowing for its implementation. The literature on the system is a testament to how the Bitcoin debates do not merely spread, communicate and diffuse representation of this currency, but are closely intertwined with the practice of the money itself. By focusing its attention on a specific corpus, that of expert discourse, the article shows how, introducing and discussing a specific device, dynamic or operation as being in some way related to trust, this expert knowledge contributes to the very definition and shaping of this trust within the Bitcoin system ultimately contributing to perform the shared definition of its value as a currency.",
"title": ""
},
{
"docid": "457f2508c59daaae9af818f8a6a963d1",
"text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.",
"title": ""
},
{
"docid": "c95c46d75c2ff3c783437100ba06b366",
"text": "Co-references are traditionally used when integrating data from different datasets. This approach has various benefits such as fault tolerance, ease of integration and traceability of provenance; however, it often results in the problem of entity consolidation, i.e., of objectively stating whether all the co-references do really refer to the same entity; and, when this is the case, whether they all convey the same intended meaning. Relying on the sole presence of a single equivalence (owl:sameAs) statement is often problematic and sometimes may even cause serious troubles. It has been observed that to indicate the likelihood of an equivalence one could use a numerically weighted measure, but the real hard questions of where precisely will these values come from arises. To answer this question we propose a methodology based on a graph clustering algorithm.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "681aba7f37ae6807824c299454af5721",
"text": "Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA’s ability to enhance the security of each device and the security of the network as a whole.",
"title": ""
},
{
"docid": "7b205b171481afeb46d7347428b223cf",
"text": "The power–voltage characteristic of photovoltaic (PV) arrays displays multiple local maximum power points when all the modules do not receive uniform solar irradiance, i.e., under partial shading conditions (PSCs). Conventional maximum power point tracking (MPPT) methods are shown to be effective under uniform solar irradiance conditions. However, they may fail to track the global peak under PSCs. This paper proposes a new method for MPPT of PV arrays under both PSCs and uniform conditions. By analyzing the solar irradiance pattern and using the popular Hill Climbing method, the proposed method tracks all local maximum power points. The performance of the proposed method is evaluated through simulations in MATLAB/SIMULINK environment. Besides, the accuracy of the proposed method is proved using experimental results.",
"title": ""
},
{
"docid": "537100a375574a44b09c65e1b80752e4",
"text": "The underlying etiology of anterior knee pain has been extensively studied. Despite many possible causes, often times the diagnosis is elusive. The most common causes in the young athlete are osteosynchondroses, patellar peritendinitis and tendinosis, synovial impingement, malalignment, and patellar instability. Less common causes are osteochondritis dissecans and tumors. It is always important to rule out underlying hip pathology and infections. When a diagnosis cannot be established, the patient is usually labeled as having idiopathic anterior knee pain. A careful history and physical examination can point to the correct diagnosis in the majority of cases. For most of these conditions, treatment is typically nonoperative with surgery reserved for refractory pain for an established diagnosis.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "44420ad43080042c64da94f5fcec2dd6",
"text": "Cloud computing is an evolving paradigm with tremendous momentum, but its unique aspects exacerbate security and privacy challenges. This article explores the roadblocks and solutions to providing a trustworthy cloud computing environment.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "f7a1eaa86a81b104a9ae62dc87c495aa",
"text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.",
"title": ""
},
{
"docid": "0f602d74485177ae98e9924bbcaae9e4",
"text": "From the early days of distance-based communications, people have naturally tried to disguise messages being sent from one place to another to avoid their intentions being revealed to other parties whether they were friends or enemies. For example, Julius Caesar sent confidential letters to Cicero in secret writing, Mary, Queen of Scots, sent secret messages to her followers from her prison, and Jefferson invented an encryption device that was even used by the Americans in World War II (Kippenhahn, 1999). However, with the advent of secret messages came also attempts at deciphering them. It happened that Mary Stuart’s secret messages were intercepted by Britain’s then secret police and decoded. In one of the messages she approved of a conspiracy against Elisabeth I, which sealed her well-known fate.",
"title": ""
},
{
"docid": "07fb577f1393bf4b33693961827e99aa",
"text": "Diabetes is one among the supreme health challenges of the current century. Most common method for estimation of blood glucose concentration is using glucose meter. The process involves pricking the finger and extracting the blood along with chemical analysis being done with the help of disposable test strips. Non-invasive method for glucose estimation promotes regular testing, adequate control and reduction in health care cost. The proposed method makes use of a near infrared sensor for determination of blood glucose. Near-infrared (NIR) is sent through the fingertip, before and after blocking the blood flow by making use of a principle called occlusion. By analyzing the variation in voltages received after reflection in both the cases with the dataset, the current diabetic condition as well as the approximate glucose level of the individual is predicted. The results obtained are being validated with glucose meter readings and statistical analysis of the readings where done. Analysis shows that the bias as well as the standard deviation decreases as the glucose concentration increases. The obtained result is then communicated with a smart phone through Bluetooth for further communication with the doctor.",
"title": ""
},
{
"docid": "b7668f382f1857ff034d8088328f866d",
"text": "Diverse lines of evidence point to a basic human aversion to physically harming others. First, we demonstrate that unwillingness to endorse harm in a moral dilemma is predicted by individual differences in aversive reactivity, as indexed by peripheral vasoconstriction. Next, we tested the specific factors that elicit the aversive response to harm. Participants performed actions such as discharging a fake gun into the face of the experimenter, fully informed that the actions were pretend and harmless. These simulated harmful actions increased peripheral vasoconstriction significantly more than did witnessing pretend harmful actions or to performing metabolically matched nonharmful actions. This suggests that the aversion to harmful actions extends beyond empathic concern for victim harm. Together, these studies demonstrate a link between the body and moral decision-making processes.",
"title": ""
},
{
"docid": "ac9bfa64fa41d4f22fc3c45adaadb099",
"text": "Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.",
"title": ""
},
{
"docid": "c03de8afcb5a6fce6c22e9394367f54d",
"text": "Thus the Gestalt domain with its three operations forms a general algebra. J. N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed. (1072), Computational Techniques and Algorithms for Image Processing (S. (1047), Universal Algebra and Coalgebra (Klaus Denecke, Shelly L. Wismath), World (986), Handbook of Mathematical Models in Computer Vision, (N. Paragios, (985), Numerical Optimization, second edition (Jorge Nocedal, Stephen J.",
"title": ""
}
] |
scidocsrr
|
a2491fc9edf79bd5a80886754ea0d252
|
xDGP: A Dynamic Graph Processing System with Adaptive Partitioning
|
[
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
}
] |
[
{
"docid": "f8b1a9de9510611e248cb18105e06298",
"text": "Sugarcane, an important field crop, is cultivated under tropical and subtropical regions around the world. Fusarium sacchari causing wilt, is a stalk disease, inflicting severe damage to the crop in India and other countries. Similarly, pokkah boeng (PB) a foliar disease caused by different species of Fusarium also infects the crop throughout the world. In India, both the diseases occur in different states in various sugarcane varieties. Although both diseases occur independently in the field, we recorded that they occur together in a plant. Hence, a detailed investigation was conducted to characterize different Fusarium isolates from wilt- and PB-affected sugarcane varieties by sequencing TEF1-α gene. Gene sequencing of 48 isolates revealed that 44 were of F. sacchari and the remaining four belonged to F. proliferatum. Of the four F. proliferatum, three were associated with PB and one with wilt. Almost all the 41 wilt-associated isolates belonged to F. sacchari. Investigation carried out to identify Fusarium isolates from the plants exhibiting both the wilt and the PB in two varieties Co 0238 and MS 901 revealed that only F. sacchari caused wilt and PB symptoms in both. Further, several varieties showed progressive disease severity through different phases of PB and that resulted in wilt development. The results clearly established for the first time that the same fungal pathogen systematically infects sugarcane plant and exhibits both the diseases.",
"title": ""
},
{
"docid": "f06aaad6da36bfd60c1937c20390f3bb",
"text": "Spinal cord injury (SCI) is a devastating neurological disorder. Autophagy is induced and plays a crucial role in SCI. Ginsenoside Rb1 (Rb1), one of the major active components extracted from Panax Ginseng CA Meyer, has exhibited neuroprotective effects in various neurodegenerative diseases. However, it remains unknown whether autophagy is involved in the neuroprotection of Rb1 on SCI. In this study, we examined the regulation of autophagy following Rb1 treatment and its involvement in the Rb1-induced neuroprotection in SCI and in vitro injury model. Firstly, we found that Rb1 treatment decreased the loss of motor neurons and promoted function recovery in the SCI model. Furthermore, we found that Rb1 treatment inhibited autophagy in neurons, and suppressed neuronal apoptosis and autophagic cell death in the SCI model. Finally, in the in vitro injury model, Rb1 treatment increased the viability of PC12 cells and suppressed apoptosis by inhibiting excessive autophagy, whereas stimulation of autophagy by rapamycin abolished the anti-apoptosis effect of Rb1. Taken together, these findings suggest that the inhibition of autophagy is involved in the neuroprotective effects of Rb1 on SCI.",
"title": ""
},
{
"docid": "dd36b71a91aa0b8b818ab6b4e6eb39c2",
"text": "Facial beauty prediction (FBP) is a significant visual recognition problem to make assessment of facial attractiveness that is consistent to human perception. To tackle this problem, various data-driven models, especially state-of-the-art deep learning techniques, were introduced, and benchmark dataset become one of the essential elements to achieve FBP. Previous works have formulated the recognition of facial beauty as a specific supervised learning problem of classification, regression or ranking, which indicates that FBP is intrinsically a computation problem with multiple paradigms. However, most of FBP benchmark datasets were built under specific computation constrains, which limits the performance and flexibility of the computational model trained on the dataset. In this paper, we argue that FBP is a multi-paradigm computation problem, and propose a new diverse benchmark dataset, called SCUT-FBP5500, to achieve multi-paradigm facial beauty prediction. The SCUT-FBP5500 dataset has totally 5500 frontal faces with diverse properties (male/female, Asian/Caucasian, ages) and diverse labels (face landmarks, beauty scores within [1], [5], beauty score distribution), which allows different computational models with different FBP paradigms, such as appearance-based/shape-based facial beauty classification/regression model for male/female of Asian/Caucasian. We evaluated the SCUT-FBP5500 dataset for FBP using different combinations of feature and predictor, and various deep learning methods. The results indicates the improvement of FBP and the potential applications based on the SCUT-FBP5500.",
"title": ""
},
{
"docid": "61556b092c6b5607e8bf2c556202570f",
"text": "The problem of recognizing actions in realistic videos is challenging yet absorbing owing to its great potentials in many practical applications. Most previous research is limited due to the use of simplified action databases under controlled environments or focus on excessively localized features without sufficiently encapsulating the spatio-temporal context. In this paper, we propose to model the spatio-temporal context information in a hierarchical way, where three levels of context are exploited in ascending order of abstraction: 1) point-level context (SIFT average descriptor), 2) intra-trajectory context (trajectory transition descriptor), and 3) inter-trajectory context (trajectory proximity descriptor). To obtain efficient and compact representations for the latter two levels, we encode the spatiotemporal context information into the transition matrix of a Markov process, and then extract its stationary distribution as the final context descriptor. Building on the multichannel nonlinear SVMs, we validate this proposed hierarchical framework on the realistic action (HOHA) and event (LSCOM) recognition databases, and achieve 27% and 66% relative performance improvements over the state-of-the-art results, respectively. We further propose to employ the Multiple Kernel Learning (MKL) technique to prune the kernels towards speedup in algorithm evaluation.",
"title": ""
},
{
"docid": "bff32fd3cc56ccf1700c9a9fd8804973",
"text": "We propose a new method to compute prediction intervals. Especially for small data sets the width of a prediction interval does not only depend on the variance of the target distribution, but also on the accuracy of our estimator of the mean of the target, i.e., on the width of the confidence interval. The confidence interval follows from the variation in an ensemble of neural networks, each of them trained and stopped on bootstrap replicates of the original data set. A second improvement is the use of the residuals on validation patterns instead of on training patterns for estimation of the variance of the target distribution. As illustrated on a synthetic example, our method is better than existing methods with regard to extrapolation and interpolation in data regimes with a limited amount of data, and yields prediction intervals which actual confidence levels are closer to the desired confidence levels. 1 STATISTICAL INTERVALS In this paper we will consider feedforward neural networks for regression tasks: estimating an underlying mathematical function between input and output variables based on a finite number of data points possibly corrupted by noise. We are given a set of Pdata pairs {ifJ, t fJ } which are assumed to be generated according to t(i) = f(i) + e(i) , (1) where e(i) denotes noise with zero mean. Straightforwardly trained on such a regression task, the output of a network o(i) given a new input vector i can be RWCP: Real World Computing Partnership; SNN: Foundation for Neural Networks. Practical Confidence and Prediction Intervals 177 interpreted as an estimate of the regression f(i) , i.e ., of the mean of the target distribution given input i. Sometimes this is all we are interested in: a reliable estimate of the regression f(i). In many applications, however, it is important to quantify the accuracy of our statements. For regression problems we can distinguish two different aspects: the accuracy of our estimate of the true regression and the accuracy of our estimate with respect to the observed output. Confidence intervals deal with the first aspect, i.e. , consider the distribution of the quantity f(i) o(i), prediction intervals with the latter, i.e., treat the quantity t(i) o(i). We see from t(i) o(i) = [f(i) o(i)] + ~(i) , (2) that a prediction interval necessarily encloses the corresponding confidence interval. In [7] a method somewhat similar to ours is introduced to estimate both the mean and the variance of the target probability distribution. It is based on the assumption that there is a sufficiently large data set, i.e., that their is no risk of overfitting and that the neural network finds the correct regression. In practical applications with limited data sets such assumptions are too strict. In this paper we will propose a new method which estimates the inaccuracy of the estimator through bootstrap resampling and corrects for the tendency to overfit by considering the residuals on validation patterns rather than those on training patterns. 2 BOOTSTRAPPING AND EARLY STOPPING Bootstrapping [3] is based on the idea that the available data set is nothing but a particular realization of some unknown probability distribution. Instead of sampling over the \"true\" probability distribution , which is obviously impossible, one defines an empirical distribution. With so-called naive bootstrapping the empirical distribution is a sum of delta peaks on the available data points, each with probability content l/Pdata. 
A bootstrap sample is a collection of Pdata patterns drawn with replacement from this empirical probability distribution. This bootstrap sample is nothing but our training set and all patterns that do not occur in the training set are by definition part of the validation set . For large Pdata, the probability that a pattern becomes part of the validation set is (1 l/Pdata)Pdata ~ lie ~ 0.37. When training a neural network on a particular bootstrap sample, the weights are adjusted in order to minimize the error on the training data. Training is stopped when the error on the validation data starts to increase. This so-called early stopping procedure is a popular strategy to prevent overfitting in neural networks and can be viewed as an alternative to regularization techniques such as weight decay. In this context bootstrapping is just a procedure to generate subdivisions in training and validation set similar to k-fold cross-validation or subsampling. On each of the nrun bootstrap replicates we train and stop a single neural network. The output of network i on input vector i IJ is written oi(ilJ ) == or. As \"the\" estimate of our ensemble of networks for the regression f(i) we take the average output l 1 nrun m(i) == L.: oi(i). n run i=l lThis is a so-called \"bagged\" estimator [2]. In [5] it is shown that a proper balancing of the network outputs can yield even better results.",
"title": ""
},
{
"docid": "2b3ff62e2e5742fc17734f2094a5f6fb",
"text": "This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well known security schemes: Shamir’s secret sharing, perturbation-based, and various encryptions. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant.Differential privacy of the final result is achieved by either distributed Laplace or Geometric mechanism (respectively DLPA or DGPA), while approximated differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, which draw random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency. We compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with differential privacy.",
"title": ""
},
{
"docid": "896d9382066abc722f3d8a1793f0a67d",
"text": "In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with crossentropy or with reinforcement learning with commonly used scores as rewards.",
"title": ""
},
{
"docid": "b3790611437e1660b7c222adcb26b510",
"text": "There have been increasing interests in the robotics community in building smaller and more agile autonomous micro aerial vehicles (MAVs). In particular, the monocular visual-inertial system (VINS) that consists of only a camera and an inertial measurement unit (IMU) forms a great minimum sensor suite due to its superior size, weight, and power (SWaP) characteristics. In this paper, we present a tightly-coupled nonlinear optimization-based monocular VINS estimator for autonomous rotorcraft MAVs. Our estimator allows the MAV to execute trajectories at 2 m/s with roll and pitch angles up to 30 degrees. We present extensive statistical analysis to verify the performance of our approach in different environments with varying flight speeds.",
"title": ""
},
{
"docid": "e64320b71675f2a059a50fd9479d2056",
"text": "Extreme sports (ES) are usually pursued in remote locations with little or no access to medical care with the athlete competing against oneself or the forces of nature. They involve high speed, height, real or perceived danger, a high level of physical exertion, spectacular stunts, and heightened risk element or death.Popularity for such sports has increased exponentially over the past two decades with dedicated TV channels, Internet sites, high-rating competitions, and high-profile sponsors drawing more participants.Recent data suggest that the risk and severity of injury in some ES is unexpectedly high. Medical personnel treating the ES athlete need to be aware there are numerous differences which must be appreciated between the common traditional sports and this newly developing area. These relate to the temperament of the athletes themselves, the particular epidemiology of injury, the initial management following injury, treatment decisions, and rehabilitation.The management of the injured extreme sports athlete is a challenge to surgeons and sports physicians. Appropriate safety gear is essential for protection from severe or fatal injuries as the margins for error in these sports are small.The purpose of this review is to provide an epidemiologic overview of common injuries affecting the extreme athletes through a focus on a few of the most popular and exciting extreme sports.",
"title": ""
},
{
"docid": "5c05b2d2086125bc8c6364b58c37971a",
"text": "In this exploratory field-study, we examined how normative messages (i.e., activating an injunctive norm, personal norm, or both) could encourage shoppers to use fewer free plastic bags for their shopping in addition to the supermarket‘s standard environmental message aimed at reducing plastic bags. In a one-way subjects-design (N = 200) at a local supermarket, we showed that shoppers used significantly fewer free plastic bags in the injunctive, personal and combined normative message condition than in the condition where only an environmental message was present. The combined normative message did result in the smallest uptake of free plastic bags compared to the injunctive and personal normative-only message, although these differences were not significant. Our findings imply that re-wording the supermarket‘s environmental message by including normative information could be a promising way to reduce the use of free plastic bags, which will ultimately benefit the environment.",
"title": ""
},
{
"docid": "caaca962473382e40a08f90240cc88b6",
"text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.",
"title": ""
},
{
"docid": "2dfb8e3f50c1968b441872fa4aa13fec",
"text": "An ultra-wideband Vivaldi antenna with dual-polarization capability is presented. A two-section quarter-wave balun feedline is developed to feed the tapered slot antenna, which improves the impedance matching performance especially in the low frequency regions. The dual-polarization is realized by orthogonally combining two identical Vivaldi antennas without a galvanic contact. Measured results have been presented with a fractional bandwidth of 172% from 0.56 GHz to 7.36 GHz for S11 < −10 dB and a good port isolation of S21 < −22 dB. The measured antenna gain of up to 9.4 dBi and cross-polarization discrimination (XPD) of more than 18 dB is achieved, making the antenna suitable for mobile communication testing in chambers or open-site facilities.",
"title": ""
},
{
"docid": "1b0dcde6dceb85c4f6278f6944f607e8",
"text": "Firms around the world have been implementing enterprise resource planning (ERP) systems since the 1990s to have an uniform information system in their respective organizations and to reengineer their business processes. Through a case type analysis conducted in six manufacturing firms that have one of the widely used ERP systems, various contextual factors that influenced these firms to implement this technology were understood using the six-stage model proposed by Kwon and Zmud. Three types of ERP systems, viz. SAP, Baan and Oracle ERP were studied in this research. Implementation of ERP systems was found to follow the stage model. The findings from the process model were used to develop the items for the causal model and in identifying appropriate constructs to group those items. In order to substantiate that the constructs developed to measure the causal model were congruent with the findings based on qualitative analysis, i.e. that the instrument appropriately reflects the understanding of the case interview; ‘triangulation’ technique was used. The findings from the qualitative study and the results from the quantitative study were found to be equivalent, thus, ensuring a fair assessment of the validity and reliability of the instrument developed to test the causal model. The quantitative measures done only at these six firms are not statistically significant but the samples were used as a part of the triangulation method to collect data from multiple sources, to verify the respondents’ understanding of the scales and as an initial measure to see if my understanding from the qualitative studies were accurately reflected by the instrument. This instrument will be pilot tested first and administered to a large sample of firms. # 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "fb11348b48f65a4d3101727308a1f4fc",
"text": "Spin-transfer torque random access memory (STT-RAM) has emerged as an attractive candidate for future nonvolatile memories. It advantages the benefits of current state-of-the-art memories including high-speed read operation (of static RAM), high density (of dynamic RAM), and nonvolatility (of flash memories). However, the write operation in the 1T-1MTJ STT-RAM bitcell is asymmetric and stochastic, which leads to high energy consumption and long latency. In this paper, a new write assist technique is proposed to terminate the write operation immediately after switching takes place in the magnetic tunneling junction (MTJ). As a result, both the write time and write energy consumption of 1T-1MTJ bitcells improves. Moreover, the proposed write assist technique leads to an error-free write operation. The simulation results using a 65-nm CMOS access transistor and a 40-nm MTJ technology confirm that the proposed write assist technique results in three orders of magnitude improvement in bit error rate compared with the best existing techniques. Moreover, the proposed write assist technique leads to 81% energy saving compared with a cell without write assist and adds only 9.6% area overhead to a 16-kbit STT-RAM array.",
"title": ""
},
{
"docid": "509658ef2758b5dd01f50e99ffe5ee4b",
"text": "The impact of reject brine chemical composition and disposal from inland desalination plants on soil and groundwater in the eastern region of Abu Dhabi Emirate, namely Al Wagan, Al Quaa and Um Al Zumool, was evaluated. Twenty five inland BWRO desalination plants (11 at Al Wagan, 12 at Al Quaa, and 2 at Um Al Zumool) have been investigated. The study indicated that average capacity of these plants varied between 26,400 G/d (99.93 m/d) to 61,000 G/d (230.91 m/d). The recovery rate varied from 60 to 70% and the reject brine accounted for about 30–40% of the total water production. The electrical conductivity of feed water and rejects brine varied from 4.61 to 14.70 and 12.90–30.30 (mS/cm), respectively. The reject brine is disposed directly into surface impoundment (unlined pits) in a permeable soil with low clay content, cation exchange capacity and organic matter content. The groundwater table 1ies at a depth of 100–150 m. The average distance between feed water intake and the disposal site is approximately 5 km. A survey has been conducted to gather basic information, determine the type of chemicals used, and determine if there is any current and previous monitoring program. The chemical compositions of the feed, product, reject, and pond water have been analyzed for major, minor and trace constituents. Most of the water samples (feed, product, reject and pond water) showed the presence of major, minor and trace constituents. Some of these constituents are above the Gulf Cooperation Council (GCC) and Abu-Dhabi National Oil Company (ADNOC) Standards for drinking water and effluents discharged into the desert. Total Petroleum Hydrocarbon (TPH) was also analyzed and found to be present, even in product water samples, in amount that exceed the GCC standards for organic chemical constituents in drinking water (0.01 mg/l).",
"title": ""
},
{
"docid": "0374d93d82ec404b7beee18aaa9bfbf1",
"text": "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma’s Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hardexploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hardexploration problems. On Montezuma’s Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezuma’s Revenge. Its max performance of nearly 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest definition of “superhuman” performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean performance of almost 60,000 points also exceeds expert human performance. Because GoExplore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics).",
"title": ""
},
{
"docid": "d549d4e7c30e004556ac78bdc4119b92",
"text": "Bitcoin is a peer-to-peer cryptographic currency system. Since its introduction in 2008, Bitcoin has gained noticeable popularity, mostly due to its following properties: (1) the transaction fees are very low, and (2) it is not controlled by any central authority, which in particular means that nobody can “print” the money to generate inflation. Moreover, the transaction syntax allows to create the so-called contracts, where a number of mutually-distrusting parties engage in a protocol to jointly perform some financial task, and the fairness of this process is guaranteed by the properties of Bitcoin. Although the Bitcoin contracts have several potential applications in the digital economy, so far they have not been widely used in real life. This is partly due to the fact that they are cumbersome to create and analyze, and hence risky to use. In this paper we propose to remedy this problem by using the methods originally developed for the computer-aided analysis for hardware and software systems, in particular those based on the timed automata. More concretely, we propose a framework for modeling the Bitcoin contracts using the timed automata in the UPPAAL model checker. Our method is general and can be used to model several contracts. As a proof-of-concept we use this framework to model some of the Bitcoin contracts from our recent previous work. We then automatically verify their security in UPPAAL, finding (and correcting) some subtle errors that were difficult to spot by the manual analysis. We hope that our work can draw the attention of the researchers working on formal modeling to the problem of the Bitcoin contract verification, and spark off more research on this topic.",
"title": ""
},
{
"docid": "c2756af71724249b458ffdf7a49c4060",
"text": "Objectives. Cooccurring psychiatric disorders influence the outcome and prognosis of gender dysphoria. The aim of this study is to assess psychiatric comorbidities in a group of patients. Methods. Eighty-three patients requesting sex reassignment surgery (SRS) were recruited and assessed through the Persian Structured Clinical Interview for DSM-IV Axis I disorders (SCID-I). Results. Fifty-seven (62.7%) patients had at least one psychiatric comorbidity. Major depressive disorder (33.7%), specific phobia (20.5%), and adjustment disorder (15.7%) were the three most prevalent disorders. Conclusion. Consistent with most earlier researches, the majority of patients with gender dysphoria had psychiatric Axis I comorbidity.",
"title": ""
},
{
"docid": "298894941f7615ea12291a815cb0752d",
"text": "This paper describes ongoing research and development of machine learning and other complementary automatic learning techniques in a framework adapted to the specific needs of power system security assessment. In the proposed approach, random sampling techniques are considered to screen all relevant power system operating situations, while existing numerical simulation tools are exploited to derive detailed security information. The heart of the framework is provided by machine learning methods used to extract and synthesize security knowledge reformulated in a suitable way for decision making. This consists of transforming the data base of case by case numerical simulations into a power system security knowledge base. The main expected fallouts with respect to existing security assessment methods are computational efficiency, better physical insight into non-linear problems, and management of uncertainties. The paper discusses also the complementary roles of various automatic learning methods in this framework, such as decision tree induction, multilayer perceptrons and nearest neighbor classifiers. Illustrations are taken from two different real large scale power system security problems : transient stability assessment of the Hydro-Québec system and voltage security assessment of the system of Electricité de France.",
"title": ""
},
{
"docid": "3e7a9fa9f575270a5cdf8f869d4a75dd",
"text": "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certaintydriven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain/reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.",
"title": ""
}
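A minimal sketch of the Filtering idea described in the abstract above, assuming PyTorch, a hypothetical teacher_model whose dropout layers stay active across repeated forward passes, and an arbitrary variance threshold; this is not the authors' implementation, and the entropy-based and temperature-based variants they describe are omitted.

import torch
import torch.nn.functional as F

def certainty_filtered_consistency(student_logits, teacher_model, x,
                                   n_samples=5, threshold=0.05):
    """Keep the consistency term only for samples whose teacher predictions
    show low variance across stochastic forward passes (dropout/augmentation)."""
    teacher_model.train()  # keep dropout active so repeated passes differ
    with torch.no_grad():
        probs = torch.stack([F.softmax(teacher_model(x), dim=-1)
                             for _ in range(n_samples)])       # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)                              # consensus teacher target
    uncertainty = probs.var(dim=0).mean(dim=-1)                 # per-sample predictive variance
    mask = (uncertainty < threshold).float()                    # 1 = certain enough to trust
    per_sample = F.mse_loss(F.softmax(student_logits, dim=-1), mean_probs,
                            reduction='none').mean(dim=-1)
    return (mask * per_sample).sum() / mask.sum().clamp(min=1.0)

Masking rather than re-weighting keeps the gradient contribution of uncertain samples at exactly zero, which is the simplest reading of the filtering variant; the threshold would normally be tuned or annealed.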
] |
scidocsrr
|
12a46f59b3f63b3d2a4db5b68d66590e
|
In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation
|
[
{
"docid": "ff75699519c0df47220624db263b483a",
"text": "We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user's physical environment. Our system is completely self-contained and uses depth sensors to track the location of a user's fingers as well as to capture the 3D shape of objects in front of the sensor. We illustrate the unique capabilities of our system through a series of interactions that allow users to control and manipulate 3D virtual content. We also provide qualitative feedback from a preliminary user study which confirmed that users can complete a shared collaborative task using our system.",
"title": ""
},
{
"docid": "edbbff35d892f66f7312b88a4cbdb214",
"text": "Various interaction techniques have been developed for interactive 3D environments. This paper presents an upto-date and comprehensive review of the state of the art of non-immersive interaction techniques for Navigation, Selection & Manipulation, and System Control, including a basic introduction to the topic, the challenges, and an examination of a number of popular approaches. We hope that this survey can aid both researchers and developers of interactive 3D applications in having a clearer overview of the topic and in particular can be useful for practitioners and researchers that are new to the field of interactive 3D graphics.",
"title": ""
},
{
"docid": "7dacadc597fd3af35dd70d6b219d78a2",
"text": "We present a system integrating gesture and live video to support collaboration on physical tasks. The architecture combines network IP cameras, desktop PCs, and tablet PCs to allow a remote helper to draw on a video feed of a workspace as he/she provides task instructions. A gesture recognition component enables the system both to normalize freehand drawings to facilitate communication with remote partners and to use pen-based input as a camera control device. Results of a preliminary user study suggest that our gesture over video communication system enhances task performance over traditional video-only systems. Implications for the design of multimodal systems to support collaborative physical tasks are also discussed.",
"title": ""
}
] |
[
{
"docid": "bba4256906b1aee1c76d817b9926226c",
"text": "In this paper, we present an analytical framework to evaluate the latency performance of connection-based spectrum handoffs in cognitive radio (CR) networks. During the transmission period of a secondary connection, multiple interruptions from the primary users result in multiple spectrum handoffs and the need of predetermining a set of target channels for spectrum handoffs. To quantify the effects of channel obsolete issue on the target channel predetermination, we should consider the three key design features: 1) general service time distribution of the primary and secondary connections; 2) different operating channels in multiple handoffs; and 3) queuing delay due to channel contention from multiple secondary connections. To this end, we propose the preemptive resume priority (PRP) M/G/1 queuing network model to characterize the spectrum usage behaviors with all the three design features. This model aims to analyze the extended data delivery time of the secondary connections with proactively designed target channel sequences under various traffic arrival rates and service time distributions. These analytical results are applied to evaluate the latency performance of the connection-based spectrum handoff based on the target channel sequences mentioned in the IEEE 802.22 wireless regional area networks standard. Then, to reduce the extended data delivery time, a traffic-adaptive spectrum handoff is proposed, which changes the target channel sequence of spectrum handoffs based on traffic conditions. Compared to the existing target channel selection methods, this traffic-adaptive target channel selection approach can reduce the extended data transmission time by 35 percent, especially for the heavy traffic loads of the primary users.",
"title": ""
},
{
"docid": "2ce2d44c6c19ad683989bbf8b117f778",
"text": "Modern computer systems feature multiple homogeneous or heterogeneous computing units with deep memory hierarchies, and expect a high degree of thread-level parallelism from the software. Exploitation of data locality is critical to achieving scalable parallelism, but adds a significant dimension of complexity to performance optimization of parallel programs. This is especially true for programming models where locality is implicit and opaque to programmers. In this paper, we introduce the hierarchical place tree (HPT) model as a portable abstraction for task parallelism and data movement. The HPT model supports co-allocation of data and computation at multiple levels of a memory hierarchy. It can be viewed as a generalization of concepts from the Sequoia and X10 programming models, resulting in capabilities that are not supported by either. Compared to Sequoia, HPT supports three kinds of data movement in a memory hierarchy rather than just explicit data transfer between adjacent levels, as well as dynamic task scheduling rather than static task assignment. Compared to X10, HPT provides a hierarchical notion of places for both computation and data mapping. We describe our work-in-progress on implementing the HPT model in the Habanero-Java (HJ) compiler and runtime system. Preliminary results on general-purpose multicore processors and GPU accelerators indicate that the HPT model can be a promising portable abstraction for future multicore processors.",
"title": ""
},
{
"docid": "24bc235af5949e9af2efc1347642ae42",
"text": "In recent years, location-based services have become very popular, mainly driven by the availability of modern mobile devices with integrated position sensors. Prominent examples are points of interest finders or geo-social networks such as Facebook Places, Qype, and Loopt. However, providing such services with private user positions may raise serious privacy concerns if these positions are not protected adequately. Therefore, location privacy concepts become mandatory to ensure the user’s acceptance of location-based services. Many different concepts and approaches for the protection of location privacy have been described in the literature. These approaches differ with respect to the protected information and their effectiveness against different attacks. The goal of this paper is to assess the applicability and effectiveness of location privacy approaches systematically. We first identify different protection goals, namely personal information (user identity), spatial information (user position), and temporal information (identity/position + time). Secondly, we give an overview of basic principles and existing approaches to protect these privacy goals. In a third step, we classify possible attacks. Finally, we analyze existing approaches with respect to their protection goals and their ability to resist the introduced attacks.",
"title": ""
},
{
"docid": "a7d4881412978a41da17e282f9419bdd",
"text": "Recent studies suggest that judgments of facial masculinity reflect more than sexually dimorphic shape. Here, we investigated whether the perception of masculinity is influenced by facial cues to body height and weight. We used the average differences in three-dimensional face shape of forty men and forty women to compute a morphological masculinity score, and derived analogous measures for facial correlates of height and weight based on the average face shape of short and tall, and light and heavy men. We found that facial cues to body height and weight had substantial and independent effects on the perception of masculinity. Our findings suggest that men are perceived as more masculine if they appear taller and heavier, independent of how much their face shape differs from women's. We describe a simple method to quantify how body traits are reflected in the face and to define the physical basis of psychological attributions.",
"title": ""
},
{
"docid": "56f0552afea73ccebae94a83a2bed963",
"text": "In the Workshop on Computational Personality Recognition (Shared Task), we released two datasets, varying in size and genre, annotated with gold standard personality labels. This allowed participants to evaluate features and learning techniques, and even to compare the performances of their systems for personality recognition on a common benchmark. We had 8 participants to the task. In this paper we discuss the results and compare them to previous literature. Introduction and Background Personality Recognition (see Mairesse et Al. 2007) consists of the automatic classification of authors’ personality traits, that can be compared against gold standard labels, obtained by means of personality tests. The Big5 test (Costa & MacCrae 1985, Goldberg et al. 2006) is the most popular personality test, and has become a standard over the years. It describes personality along five traits formalized as bipolar scales, namely: 1) Extraversion (x) (sociable vs shy) 2) Neuroticism (n) (neurotic vs calm) 3) Agreeableness (a) (friendly vs uncooperative) 4) Conscientiousness (c) (organized vs careless) 5) Openness (o) (insightful vs unimaginative). In recent years the interest of the scientific community in personality recognition has grown very fast. The first pioneering works by Argamon et al 2005, Oberlander & Nowson 2006 and the seminal paper by Mairesse et al. 2007, applied personality recognition to long texts, such as short essays or blog posts. The current challenges are instead related to the extraction of personality from mobile social networks (Staiano et al 2012), from social network sites (see Quercia et al. 2011, Golbeck et al 2011, Bachrach et al. 2012, Kosinski et al. 2013) and from languages different from English (Kermanidis 2012, Bai et al 2012). There are also many other applications that can take advantage of personality recognition, including social network analysis (Celli & Rossi 2012), recommendation systems (Roshchina et Al. 2011), deception detection (Enos et Al. 2006), authorship attribution (Luyckx & Daelemans 2008), sentiment analysis/opinion mining (Golbeck & Hansen 2011), and others. Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In the Workshop on Computational Personality Recognition (Shared Task), we invited contributions from researchers or teams working in these areas or other related fields. Despite a growing number of works in personality recognition, it is still difficult to gauge their performance and quality, due to the fact that almost all the scholars working in the field run their experiments on very different datasets, and use very different evaluation procedures (Celli 2013). These problems are exacerbated by the fact that producing gold standard data for personality recognition is difficult and costly. In 2012 there has been a competition on personality prediction from Twitter streaming data1 with about 90 teams participating, thus showing the great interest of the industry and the research community about this field. The Workshop on Computational Personality Recognition (Shared Task) is different from a simple competition, because we do not want to focus just on systems’ performances, but rather we would like to provide a benchmark for discovering which feature sets, resources, and learning techniques are useful in the extraction of personality from text and from social network data. 
We released two datasets, different in size and domain, annotated with gold standard personality labels. This allowed participants to compare the performance of their personality recognition systems on a common benchmark, or to exploit personality labels for other related tasks, such as social network analysis. In this paper we summarize the results of the Workshop on Computational Personality Recognition (Shared Task), discussing challenges and possible future directions. The paper is structured as follows: in the next section we provide an overview of previous work on personality recognition. Then in the following sections we present the datasets and the shared task, we report and discuss the results, and finally we draw some conclusions.",
"title": ""
},
{
"docid": "4a4b12e5f60a0d9cee2be7d499055dd9",
"text": "This paper describes the process of inducting theory using case studies-from specifying the research questions to reaching closure. Some features of the process, such as problem definition and construct validation, are similar to hypothesis-testing research. Others, such as within-case analysis and replication logic, are unique to the inductive, case-oriented process. Overall, the process described here is highly iterative and tightly linked to data. This research approach is especially appropriate in new topic areas. The resultant theory is often novel, testable, and empirically valid. Finally, framebreaking insights, the tests of good theory (e.g., parsimony, logical coherence), and convincing grounding in the evidence are the key criteria for evaluating this type of research.",
"title": ""
},
{
"docid": "98b1965e232cce186b9be4d7ce946329",
"text": "Currently existing dynamic models for a two-wheeled inverted pendulum mobile robot have some common mistakes. In order to find where the errors of the dynamic model are induced, Lagrangian method and Kane's method are compared in deriving the equation of motion. Numerical examples are given to illustrate the effect of the incorrect terms. Finally, a complete dynamic model is proposed without any error and missing terms.",
"title": ""
},
{
"docid": "39cf15285321c7d56904c8c59b3e1373",
"text": "J. Naidoo1*, D. B. Page2, B. T. Li3, L. C. Connell3, K. Schindler4, M. E. Lacouture5,6, M. A. Postow3,6 & J. D. Wolchok3,6 Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA",
"title": ""
},
{
"docid": "cc8159c1bf2494d0c0df88343d7366b1",
"text": "Sharp electrically conductive structures integrated into micro-transfer-print compatible components provide an approach to forming electrically interconnected systems during the assembly procedure. Silicon micromachining techniques are used to fabricate print-compatible components with integrated, electrically conductive, pressure-concentrating structures. The geometry of the structures allow them to penetrate a polymer receiving layer during the elastomer stamp printing operation, and reflow of the polymer following the transfer completes the electrical interconnection when capillary action forces the gold-coated pressure-concentrator into a metal landing site. Experimental results and finite element simulations support a discussion of the mechanics of the interconnection.",
"title": ""
},
{
"docid": "31122e142e02b7e3b99c52c8f257a92e",
"text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided substantially improved impervious surface image. Accuracy assessment indicated that the rootmean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e76dbd031a451237365135bd2bac387e",
"text": "PURPOSE OF REVIEW\nDescribe developments in the etiological understanding of Tourette syndrome.\n\n\nRECENT FINDINGS\nTourette syndrome is a complex heterogenous clinical syndrome, which is not a unitary entity. Pathophysiological models describe gamma-aminobutyric acid-ergic-associated disinhibition of cortico-basal ganglia motor, sensory and limbic loops. MRI studies support basal ganglia volume loss, with additional white matter and cerebellar changes. Tourette syndrome cause likely involves multiple vulnerability genes and environmental factors. Only recently have some vulnerability gene findings been replicated, including histidine decarboxylase and neurexin 1, yet these rare variants only explain a small proportion of patients. Planned large genetic studies will improve genetic understanding. The role of inflammation as a contributor to disease expression is now supported by large epidemiological studies showing an association with maternal autoimmunity and childhood infection. Investigation of blood cytokines, blood mRNA and brain mRNA expression support the role of a persistent immune activation, and there are similarities with the immune literature of autistic spectrum disorder. Current treatment is symptomatic, although there is a better appreciation of factors that influence treatment response.\n\n\nSUMMARY\nAt present, therapeutics is focused on symptom-based treatments, yet with improved etiological understanding, we will move toward disease-modifying therapies in the future.",
"title": ""
},
{
"docid": "1592dc2c81d9d6b9c58cc1a5b530c923",
"text": "We propose a cloudlet network architecture to bring the computing resources from the centralized cloud to the edge. Thus, each User Equipment (UE) can communicate with its Avatar, a software clone located in a cloudlet, and can thus lower the end-to-end (E2E) delay. However, UEs are moving over time, and so the low E2E delay may not be maintained if UEs' Avatars stay in their original cloudlets. Thus, live Avatar migration (i.e., migrating a UE's Avatar to a suitable cloudlet based on the UE's location) is enabled to maintain the low E2E delay between each UE and its Avatar. On the other hand, the migration itself incurs extra overheads in terms of resources of the Avatar, which compromise the performance of applications running in the Avatar. By considering the gain (i.e., the E2E delay reduction) and the cost (i.e., the migration overheads) of the live Avatar migration, we propose a PRofIt Maximization Avatar pLacement (PRIMAL) strategy for the cloudlet network in order to optimize the tradeoff between the migration gain and the migration cost by selectively migrating the Avatars to their optimal locations. Simulation results demonstrate that as compared to the other two strategies (i.e., Follow Me Avatar and Static), PRIMAL maximizes the profit in terms of maintaining the low average E2E delay between UEs and their Avatars and minimizing the migration cost simultaneously.",
"title": ""
},
{
"docid": "996eb4470d33f00ed9cb9bcc52eb5d82",
"text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.",
"title": ""
},
{
"docid": "6f6d1af797df05afb9dc2081a177d9bb",
"text": "Systematic reviews of literature relevant to children and adolescents with difficulty processing and integrating sensory information are important to the practice of occupational therapy in this area. This article explains the five questions that were developed and served as the focus for these reviews: neuronal plasticity, subtyping, sensory integration and non-sensory integration occupational therapy interventions, and occupational performance for this population. Presented are the background for the reviews; the process followed for each question, including search terms and search strategy; the databases searched; and the methods used to summarize and critically appraise the literature. The final number of articles included in each systematic review, a summary of the results of the review, the strengths and limitations of the review, and implications for practice, education, and research are described.",
"title": ""
},
{
"docid": "e3459bb93bb6f7af75a182472bb42b3e",
"text": "We consider the algorithmic problem of selecting a set of target nodes that cause the biggest activation cascade in a network. In case when the activation process obeys the diminishing return property, a simple hill-climbing selection mechanism has been shown to achieve a provably good performance. Here we study models of influence propagation that exhibit critical behavior and where the property of diminishing returns does not hold. We demonstrate that in such systems the structural properties of networks can play a significant role. We focus on networks with two loosely coupled communities and show that the double-critical behavior of activation spreading in such systems has significant implications for the targeting strategies. In particular, we show that simple strategies that work well for homogenous networks can be overly suboptimal and suggest simple modification for improving the performance by taking into account the community structure.",
"title": ""
},
{
"docid": "57e70bca420ca75412758ef8591c99ab",
"text": "We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.",
"title": ""
},
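The alternating local/global schedule sketched in the abstract above can be illustrated roughly as follows, assuming NetworkX graphs and NumPy feature vectors; the learned GNN update is replaced by a plain neighbour mean and a greedy BFS flood fill stands in for the spectral or multi-seed partitioner, so this only shows the scheduling pattern, not GPNN itself.

import networkx as nx
import numpy as np

def flood_fill_partition(G, n_parts):
    """Greedy multi-seed BFS: grow n_parts regions outward from random seeds."""
    seeds = list(np.random.choice(list(G.nodes), n_parts, replace=False))
    part = {s: i for i, s in enumerate(seeds)}
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in part:
                    part[v] = part[u]
                    nxt.append(v)
        frontier = nxt
    for u in G.nodes:              # disconnected leftovers go to partition 0
        part.setdefault(u, 0)
    return part

def propagate(G, h, nodes):
    """One toy propagation step (neighbour mean) restricted to `nodes`."""
    node_set, h_new = set(nodes), dict(h)
    for u in nodes:
        nbrs = [v for v in G.neighbors(u) if v in node_set]
        if nbrs:
            h_new[u] = 0.5 * h[u] + 0.5 * np.mean([h[v] for v in nbrs], axis=0)
    return h_new

def gpnn_like_schedule(G, h, n_parts=4, rounds=3, local_steps=2):
    part = flood_fill_partition(G, n_parts)
    cut = [u for u in G.nodes if any(part[v] != part[u] for v in G.neighbors(u))]
    for _ in range(rounds):
        for p in range(n_parts):                       # local phase, one subgraph at a time
            members = [u for u in G.nodes if part[u] == p]
            for _ in range(local_steps):
                h = propagate(G, h, members)
        h = propagate(G, h, cut)                       # global phase over the cut nodes
    return h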
{
"docid": "e7fb9b39e0fee3924ff67ea74c85a36f",
"text": "A method of using a simple single parasitic element to significantly enhance a 3-dB axial-ratio (AR) bandwidth of a crossed dipole antenna is presented. This approach is verified by introducing one bowtie-shaped parasitic element between two arms of crossed bowtie dipoles to generate additional resonance and thereby significantly broadening AR bandwidth. The final design, with an overall size of 0.79 <italic>λ<sub>o</sub></italic> × 0.79 <italic> λ<sub>o</sub></italic> × 0.27 <italic>λ<sub>o</sub></italic> (<italic>λ<sub>o</sub></italic> is the free-space wavelength of circular polarized center frequency), resulted in very large measured –10-dB impedance and 3-dB AR bandwidths ranging from 1.9 to 3.9 GHz (∼68.9%) and from 2.05 to 3.75 GHz (∼58.6%), respectively. In addition, the antenna also yielded a right-hand circular polarization with stable far-field radiation patterns and an average broadside gain of approximately 8.2 dBic.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "36e1bbc473e859874d5c338dfdbc95d9",
"text": "Mobility for the blind is always a great problem. Just like a sighted, blind also needs to travel around inside a closed premises like house, factory, office, school etc. They may also like to go for shopping, visiting friends and other places of their interest. Presently available electronic travelling aids like sonic path finder, sonic torch etc. are not suitable for using inside a closed premises such as school, factory, office etc. In this paper an electronically guided walking stick that can be used conveniently inside a closed premises has been discussed.",
"title": ""
},
{
"docid": "a4319af83eaecdf3ffd84fdeea5ef62f",
"text": "In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent’s ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.",
"title": ""
}
] |
scidocsrr
|
25c25f0fc4f1437e0f3f88e7f0a79a16
|
From Papercraft to Paper Mechatronics: Exploring a New Medium and Developing a Computational Design Tool
|
[
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
}
] |
[
{
"docid": "0e57945ae40e8c0f08e92396c2592a78",
"text": "Frequent or contextually predictable words are often phonetically reduced, i.e. shortened and produced with articulatory undershoot. Explanations for phonetic reduction of predictable forms tend to take one of two approaches: Intelligibility-based accounts hold that talkers maximize intelligibility of words that might otherwise be difficult to recognize; production-based accounts hold that variation reflects the speed of lexical access and retrieval in the language production system. Here we examine phonetic variation as a function of phonological neighborhood density, capitalizing on the fact that words from dense phonological neighborhoods tend to be relatively difficult to recognize, yet easy to produce. We show that words with many phonological neighbors tend to be phonetically reduced (shortened in duration and produced with more centralized vowels) in connected speech, when other predictors of phonetic variation are brought under statistical control. We argue that our findings are consistent with the predictions of production-based accounts of pronunciation variation. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bd19395492dfbecd58f5cfd56b0d00a7",
"text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.",
"title": ""
},
{
"docid": "06e8d9c53fe89fbf683920e90bf09731",
"text": "Convolutional neural networks (CNNs) with their ability to learn useful spatial features have revolutionized computer vision. The network topology of CNNs exploits the spatial relationship among the pixels in an image and this is one of the reasons for their success. In other domains deep learning has been less successful because it is not clear how the structure of non-spatial data can constrain network topology. Here, we show how multivariate time series can be interpreted as space-time pictures, thus expanding the applicability of the tricks-of-the-trade for CNNs to this important domain. We demonstrate that our model beats more traditional state-of-the-art models at predicting price development on the European Power Exchange (EPEX). Furthermore, we find that the features discovered by CNNs on raw data beat the features that were hand-designed by an expert.",
"title": ""
},
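One way to read the "space-time picture" idea above is to treat a (features x time) window as a one-channel image and convolve over it; the sketch below assumes PyTorch and uses arbitrary layer sizes, so it illustrates the input layout rather than the network actually trained on the EPEX data.

import torch
import torch.nn as nn

class SpaceTimeCNN(nn.Module):
    """Treats a multivariate time-series window as a 1-channel 2D image."""
    def __init__(self, n_outputs=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),   # makes the head independent of window size
        )
        self.head = nn.Linear(32, n_outputs)

    def forward(self, x):
        # x: (batch, n_features, window) -> add a channel axis to form the "picture"
        z = self.conv(x.unsqueeze(1)).flatten(1)
        return self.head(z)

model = SpaceTimeCNN()
y_hat = model(torch.randn(8, 24, 168))   # 24 series, one week of hourly steps -> (8, 1)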
{
"docid": "9a63a5db2a40df78a436e7be87f42ff7",
"text": "A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most of the cases, but if so they activated the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.",
"title": ""
},
{
"docid": "b0a1a782ce2cbf5f152a52537a1db63d",
"text": "In piezoelectric energy harvesting (PEH), with the use of the nonlinear technique named synchronized switching harvesting on inductor (SSHI), the harvesting efficiency can be greatly enhanced. Furthermore, the introduction of its self-powered feature makes this technique more applicable for standalone systems. In this article, a modified circuitry and an improved analysis for self-powered SSHI are proposed. With the modified circuitry, direct peak detection and better isolation among different units within the circuit can be achieved, both of which result in further removal on dissipative components. In the improved analysis, details in open circuit voltage, switching phase lag, and voltage inversion factor are discussed, all of which lead to a better understanding to the working principle of the self-powered SSHI. Both analyses and experiments show that, in terms of harvesting power, the higher the excitation level, the closer between self-powered and ideal SSHI; at the same time, the more beneficial the adoption of self-powered SSHI treatment in piezoelectric energy harvesting, compared to the standard energy harvesting (SEH) technique.",
"title": ""
},
{
"docid": "e995ed011dedd9e543f07a4af78e27bb",
"text": "Over the last years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to keep accurate and be effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complimentary axes, namely, (I) the use of FCA-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated together.",
"title": ""
},
{
"docid": "d437d71047b70736f5a6cbf3724d62a9",
"text": "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoderdecoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) “fool” pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"title": ""
},
{
"docid": "646b594b713a92a5a0ab6b97ee91d927",
"text": "We aim to constrain the evolution of active galactic nuclei (AGNs) as a function of obscuration using an X-ray-selected sample of ∼2000 AGNs from a multi-tiered survey including the CDFS, AEGIS-XD, COSMOS, and XMM-XXL fields. The spectra of individual X-ray sources are analyzed using a Bayesian methodology with a physically realistic model to infer the posterior distribution of the hydrogen column density and intrinsic X-ray luminosity. We develop a novel non-parametric method that allows us to robustly infer the distribution of the AGN population in X-ray luminosity, redshift, and obscuring column density, relying only on minimal smoothness assumptions. Our analysis properly incorporates uncertainties from low count spectra, photometric redshift measurements, association incompleteness, and the limited sample size. We find that obscured AGNs with NH > 1022 cm−2 account for 77 −5% of the number density and luminosity density of the accretion supermassive black hole population with LX > 1043 erg s−1, averaged over cosmic time. Compton-thick AGNs account for approximately half the number and luminosity density of the obscured population, and 38 −7% of the total. We also find evidence that the evolution is obscuration dependent, with the strongest evolution around NH ≈ 1023 cm−2. We highlight this by measuring the obscured fraction in Compton-thin AGNs, which increases toward z ∼ 3, where it is 25% higher than the local value. In contrast, the fraction of Compton-thick AGNs is consistent with being constant at ≈35%, independent of redshift and accretion luminosity. We discuss our findings in the context of existing models and conclude that the observed evolution is, to first order, a side effect of anti-hierarchical growth.",
"title": ""
},
{
"docid": "994fcd84c9f2d75df6388cfe5ea33d06",
"text": "In this paper, we present a modeling and monitoring scheme of the friction between the wafer and polishing pad for the linear chemical-mechanical planarization (CMP) processes. Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. We present an experimental validation of wafer/pad friction modeling and analysis. Pad conditioning and wafer film topography effects on the wafer/pad friction are also experimentally demonstrated. Finally, one application example is illustrated the use of friction torques for real-time monitoring the shallow trench isolation (STI) CMP processes.",
"title": ""
},
{
"docid": "a412f5facafdb2479521996c05143622",
"text": "A temperature and supply independent on-chip reference relaxation oscillator for low voltage design is described. The frequency of oscillation is mainly a function of a PVT robust biasing current. The comparator for the relaxation oscillator is replaced with a high speed common-source stage to eliminate the temperature dependency of the comparator delay. The current sources and voltages are biased by a PVT robust references derived from a bandgap circuitry. This oscillator is designed in TSMC 65 nm CMOS process to operate with a minimum supply voltage of 1.4 V and consumes 100 μW at 157 MHz frequency of oscillation. The oscillator exhibits frequency variation of 1.6% for supply changes from 1.4 V to 1.9 V, and ±1.2% for temperature changes from 20°C to 120°C.",
"title": ""
},
{
"docid": "b5f356974c0272e04b6e4844b297684e",
"text": "The development and spread of chloroquine-resistant Plasmodium falciparum threatens the health of millions of people and poses a major challenge to the control of malaria. Monitoring drug efficacy in 2-year intervals is an important tool for establishing rational anti-malarial drug policies. This study addresses the therapeutic efficacy of artemether-lumefantrine (AL) for the treatment of Plasmodium falciparum in southwestern Ethiopia. A 28-day in vivo therapeutic efficacy study was conducted from September to December, 2011, in southwestern Ethiopia. Participants were selected for the study if they were older than 6 months, weighed more than 5 kg, symptomatic, and had microscopically confirmed, uncomplicated P. falciparum. All 93 eligible patients were treated with AL and followed for 28 days. For each patient, recurrence of parasitaemia, the clinical condition, and the presence of gametoytes were assessed on each visit during the follow-up period. PCR was conducted to differentiate re-infection from recrudescence. Seventy-four (83.1 %) of the study subjects cleared fever by day 1, but five (5.6 %) had fever at day 2. All study subjects cleared fever by day 3. Seventy-nine (88.8 %) of the study subjects cleared the parasite by day 1, seven (7.9 %) were blood-smear positive by day 1, and three (3.4 %) were positive by day 2. In five patients (5.6 %), parasitaemia reappeared during the 28-day follow-up period. From these five, one (1.1 %) was a late clinical failure, and four (4.5 %) were a late parasitological failure. On the day of recurrent parasitaemia, the level of chloroquine/desethylchloroquine (CQ-DCQ) was above the minimum effective concentration (>100 ng/ml) in one patient. There were 84 (94.4 %) adequate clinical and parasitological responses. The 28-day, PCR-uncorrected (unadjusted by genotyping) cure rate was 84 (94.4 %), whereas the 28-day, PCR-corrected cure rate was 87 (97.8 %). Of the three re-infections, two (2.2 %) were due to P. falciparum and one (1.1 %) was due to P. vivax. From 89 study subjects, 12 (13.5 %) carried P. falciparum gametocytes at day 0, whereas the 28-day gametocyte carriage rate was 2 (2.2 %). Years after the introduction of AL in Ethiopia, the finding of this study is that AL has been highly effective in the treatment of uncomplicated P. falciparum malaria and reducing gametocyte carriage in southwestern Ethiopia.",
"title": ""
},
{
"docid": "a17e1bf423195ff66d73456f931fa5a1",
"text": "We propose a dialogue state tracker based on long short term memory (LSTM) neural networks. LSTM is an extension of a recurrent neural network (RNN), which can better consider distant dependencies in sequential input. We construct a LSTM network that receives utterances of dialogue participants as input, and outputs the dialogue state of the current utterance. The input utterances are separated into vectors of words with their orders, which are further converted to word embeddings to avoid sparsity problems. In experiments, we combined this system with the baseline system of the dialogue state tracking challenge (DSTC), and achieved improved dialogue state tracking accuracy.",
"title": ""
},
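A minimal version of such a tracker, assuming PyTorch and hypothetical vocabulary and slot-value counts, encodes each utterance with an embedding layer followed by an LSTM and classifies the final hidden state; the combination with the DSTC baseline described in the abstract is not reproduced here.

import torch
import torch.nn as nn

class LSTMStateTracker(nn.Module):
    def __init__(self, vocab_size, n_slot_values, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_slot_values)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices for one utterance
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)      # h_n: (1, batch, hidden_dim)
        return self.out(h_n[-1])          # logits over the slot values

tracker = LSTMStateTracker(vocab_size=5000, n_slot_values=20)
logits = tracker(torch.randint(1, 5000, (4, 12)))   # 4 utterances, 12 tokens each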
{
"docid": "5f63681c406856bc0664ee5a32d04b18",
"text": "In 2008, the emergence of the blockchain as the foundation of the first-ever decentralized cryptocurrency not only revolutionized the financial industry but proved a boon for peer-to-peer (P2P) information exchange in the most secure, efficient, and transparent manner. The blockchain is a public ledger that works like a log by keeping a record of all transactions in chronological order, secured by an appropriate consensus mechanism and providing an immutable record. Its exceptional characteristics include immutability, irreversibility, decentralization, persistence, and anonymity.",
"title": ""
},
{
"docid": "8e7c2943eb6df575bf847cd67b6424dc",
"text": "Today, money laundering poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché, of drug trafficking to financing terrorism and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered as well-suited techniques for detecting money laundering activities. Within the scope of a collaboration project for the purpose of developing a new solution for the anti-money laundering Units in an international investment bank, we proposed a simple and efficient data mining-based solution for anti-money laundering. In this paper, we present this solution developed as a tool and show some preliminary experiment results with real transaction datasets.",
"title": ""
},
{
"docid": "1ebc62dc8dfeaf9c547e7fe3d4d21ae7",
"text": "Electrically small antennas are generally presumed to exhibit high impedance mismatch (high VSWR), low efficiency, high quality factor (Q); and, therefore, narrow operating bandwidth. For an electric or magnetic dipole antenna, there is a fundamental lower bound for the quality factor that is determined as a function of the antenna's occupied physical volume. In this paper, the quality factor of a resonant, electrically small electric dipole is minimized by allowing the antenna geometry to utilize the occupied spherical volume to the greatest extent possible. A self-resonant, electrically small electric dipole antenna is presented that exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and a quality factor that is within 1.5 times the fundamental lower bound at a value of ka less than 0.27. Through an arrangement of the antenna's wire geometry, the electrically small dipole's polarization is converted from linear to elliptical (with an axial ratio of 3 dB), resulting in a further reduction in the quality factor. The elliptically polarized, electrically small antenna exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and it has an omnidirectional, figure-eight radiation pattern.",
"title": ""
},
{
"docid": "829e437aee100b302f35900e0b0a91ab",
"text": "A 1. 5 V 0.18mum CMOS LNA for GPS applications has been designed with fully differential topology. Under such a low supply voltage, the fully differential LNA has been simulated, it provides a series of good results in Noise figure, Linearity and Power consumption. The LNA achieves a Noise figure of 1. 5 dB, voltage gain of 32 dB, Power dissipation of 6 mW, and the input reflection coefficient (Sn) is -23 dB.",
"title": ""
},
{
"docid": "507353e988950736e35f78185d320ce4",
"text": "Traceability is an important concern in projects that span different engineering domains. Traceability can also be mandated, exploited and managed across the engineering lifecycle, and may involve defining connections between heterogeneous models. As a result, traceability can be considered to be multi-domain. This thesis introduces the concept and challenges of multi-domain traceability and explains how it can be used to support typical traceability scenarios. It proposes a model-based approach to develop a traceability solution which effectively operates across multiple engineering domains. The approach introduced a collection of tasks and structures which address the identified challenges for a traceability solution in multi-domain projects. The proposed approach demonstrates that modelling principles and MDE techniques can help to address current challenges and consequently improve the effectiveness of a multi-domain traceability solution. A prototype of the required tooling to support the approach is implemented with EMF and atop Epsilon; it consists of an implementation of the proposed structures (models) and model management operations to support traceability. Moreover, the approach is illustrated in the context of two safety-critical projects where multi-domain traceability is required to underpin certification arguments.",
"title": ""
},
{
"docid": "8f449e62b300c4c8ff62306d02f2f820",
"text": "The effects of adrenal corticosteroids on subsequent adrenocorticotropin secretion are complex. Acutely (within hours), glucocorticoids (GCs) directly inhibit further activity in the hypothalamo-pituitary-adrenal axis, but the chronic actions (across days) of these steroids on brain are directly excitatory. Chronically high concentrations of GCs act in three ways that are functionally congruent. (i) GCs increase the expression of corticotropin-releasing factor (CRF) mRNA in the central nucleus of the amygdala, a critical node in the emotional brain. CRF enables recruitment of a chronic stress-response network. (ii) GCs increase the salience of pleasurable or compulsive activities (ingesting sucrose, fat, and drugs, or wheel-running). This motivates ingestion of \"comfort food.\" (iii) GCs act systemically to increase abdominal fat depots. This allows an increased signal of abdominal energy stores to inhibit catecholamines in the brainstem and CRF expression in hypothalamic neurons regulating adrenocorticotropin. Chronic stress, together with high GC concentrations, usually decreases body weight gain in rats; by contrast, in stressed or depressed humans chronic stress induces either increased comfort food intake and body weight gain or decreased intake and body weight loss. Comfort food ingestion that produces abdominal obesity, decreases CRF mRNA in the hypothalamus of rats. Depressed people who overeat have decreased cerebrospinal CRF, catecholamine concentrations, and hypothalamo-pituitary-adrenal activity. We propose that people eat comfort food in an attempt to reduce the activity in the chronic stress-response network with its attendant anxiety. These mechanisms, determined in rats, may explain some of the epidemic of obesity occurring in our society.",
"title": ""
},
{
"docid": "cfce53c88e07b9cd837c3182a24d9901",
"text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "05457fe0f541e313c01b5d4b4015fa7b",
"text": "This paper presents the case for and the evidence in favour of passive investment strategies and examines the major criticisms of the technique. I conclude that the evidence strongly supports passive investment management in all markets—smallcapitalisation stocks as well as large-capitalisation equities, US markets as well as international markets, and bonds as well as stocks. Recent attacks on the efficient market hypothesis do not weaken the case for indexing.",
"title": ""
}
] |
scidocsrr
|
f34ced239c02a52fbb771d042910a58d
|
Big data emerging technologies: A Case Study with analyzing twitter data using apache hive
|
[
{
"docid": "298b65526920c7a094f009884439f3e4",
"text": "Big Data concerns massive, heterogeneous, autonomous sources with distributed and decentralized control. These characteristics make it an extreme challenge for organizations using traditional data management mechanism to store and process these huge datasets. It is required to define a new paradigm and re-evaluate current system to manage and process Big Data. In this paper, the important characteristics, issues and challenges related to Big Data management has been explored. Various open source Big Data analytics frameworks that deal with Big Data analytics workloads have been discussed. Comparative study between the given frameworks and suitability of the same has been proposed.",
"title": ""
},
{
"docid": "e9aac361f8ca1bb8f10409859aef718d",
"text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.",
"title": ""
}
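The abstract above does not give the placement formula, but the underlying idea, splitting input data in proportion to each node's measured processing speed, can be sketched as follows; the node names and throughput numbers are invented for illustration and this is not the paper's actual mechanism.

def placement_plan(total_blocks, node_throughput):
    """Assign input blocks to nodes in proportion to their measured
    processing speed (e.g. blocks processed per minute in a probe run)."""
    total_speed = sum(node_throughput.values())
    plan = {node: int(total_blocks * speed / total_speed)
            for node, speed in node_throughput.items()}
    leftover = total_blocks - sum(plan.values())      # blocks lost to rounding
    for node in sorted(node_throughput, key=node_throughput.get, reverse=True)[:leftover]:
        plan[node] += 1                               # hand them to the fastest nodes
    return plan

print(placement_plan(1000, {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}))
# -> {'node-a': 572, 'node-b': 286, 'node-c': 142}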
] |
[
{
"docid": "07ef9eece7de49ee714d4a2adf9bb078",
"text": "Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible.",
"title": ""
},
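The Langmuir and Freundlich models mentioned above have standard closed forms and can be fitted with SciPy; the concentration and loading values below are invented for illustration and are not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

def freundlich(Ce, K_F, n):
    return K_F * Ce ** (1.0 / n)

# Ce: equilibrium PAH concentration in the oil, qe: amount adsorbed per g of carbon
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])     # mg/L (illustrative)
qe = np.array([2.1, 3.6, 5.8, 8.2, 10.4, 11.9])    # mg/g (illustrative)

p_lang, _ = curve_fit(langmuir, Ce, qe, p0=[12.0, 0.5])
p_freu, _ = curve_fit(freundlich, Ce, qe, p0=[3.0, 2.0])
print("Langmuir:   q_max=%.2f  K_L=%.3f" % tuple(p_lang))
print("Freundlich: K_F=%.2f    n=%.2f" % tuple(p_freu))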
{
"docid": "4b494016220eb5442642e34c3ed2d720",
"text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.",
"title": ""
},
{
"docid": "5d21df36697616719bcc3e0ee22a08bd",
"text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics",
"title": ""
},
{
"docid": "580e2f24b8b4a7564e132b87420fe7ad",
"text": "Walking is a vital exercise for health promotion and a fundamental ability necessary for everyday life. In the authors' previous studies, an omni-directional walker was developed for walking rehabilitation. Walking training programs are stored in the walker and the walker must precisely follow the paths defined in the walking training programs to guarantee the effectiveness of rehabilitation. In the previous study, an adaptive control method has been proposed for path tracking of the walker considering a center of gravity shift and load change. In this paper simulations and running experiments are carried out to verify the proposed adaptive control method. First, the kinematics and the kinetics of the omni-directional walker motion are described. Second, the adaptive control strategy is presented. Finally, path tracking simulations and experiments are carried out using the proposed method. Comparing with the proportional-integral-derivative control (PID control), the simulation and experiment results demonstrate the feasibility and effectiveness of the adaptive control method.",
"title": ""
},
{
"docid": "fb9c0650f5ac820eef3df65b7de1ff12",
"text": "Since 2013, a number of studies have enhanced the literature and have guided clinicians on viable treatment interventions outside of pharmacotherapy and surgery. Thirty-three randomized controlled trials and one large observational study on exercise and physiotherapy were published in this period. Four randomized controlled trials focused on dance interventions, eight on treatment of cognition and behavior, two on occupational therapy, and two on speech and language therapy (the latter two specifically addressed dysphagia). Three randomized controlled trials focused on multidisciplinary care models, one study on telemedicine, and four studies on alternative interventions, including music therapy and mindfulness. These studies attest to the marked interest in these therapeutic approaches and the increasing evidence base that places nonpharmacological treatments firmly within the integrated repertoire of treatment options in Parkinson's disease.",
"title": ""
},
{
"docid": "6bd18974879c8f38309e8ebb818c6ebf",
"text": "Calcium (Ca(2+)) is a ubiquitous signaling molecule that accumulates in the cytoplasm in response to diverse classes of stimuli and, in turn, regulates many aspects of cell function. In neurons, Ca(2+) influx in response to action potentials or synaptic stimulation triggers neurotransmitter release, modulates ion channels, induces synaptic plasticity, and activates transcription. In this article, we discuss the factors that regulate Ca(2+) signaling in mammalian neurons with a particular focus on Ca(2+) signaling within dendritic spines. This includes consideration of the routes of entry and exit of Ca(2+), the cellular mechanisms that establish the temporal and spatial profile of Ca(2+) signaling, and the biophysical criteria that determine which downstream signals are activated when Ca(2+) accumulates in a spine. Furthermore, we also briefly discuss the technical advances that made possible the quantitative study of Ca(2+) signaling in dendritic spines.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "55462ae5eeb747114dfda77d14519557",
"text": "In an environment where supply chains compete against supply chains, information sharing among supply chain partners using information systems is a competitive tool. Supply chain ontology has been proposed as an important medium for attaining information systems interoperability. Ontology has its origin in philosophy, and the computing community has adopted ontology in its language. This paper presents a study of state of the art research in supply chain ontology and identifies the outstanding research gaps. Six supply chain ontology models were identified from a systematic review of literature. A seven point comparison framework was developed to consider the underlying concepts as well as application of the ontology models. The comparison results were then synthesised into nine gaps to inform future supply chain ontology research. This work is a rigorous and systematic attempt to identify and synthesise the research in supply chain ontology. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6be44677f42b5a6aaaea352e11024cfa",
"text": "In this paper, we intend to discuss if and in what sense semiosis (meaning process, cf. C.S. Peirce) can be regarded as an “emergent” process in semiotic systems. It is not our problem here to answer when or how semiosis emerged in nature. As a prerequisite for the very formulation of these problems, we are rather interested in discussing the conditions which should be fulfilled for semiosis to be characterized as an emergent process. The first step in this work is to summarize a systematic analysis of the variety of emergence theories and concepts, elaborated by Achim Stephan. Along the summary of this analysis, we pose fundamental questions that have to be answered in order to ascribe a precise meaning to the term “emergence” in the context of an understanding of semiosis. After discussing a model for explaining emergence based on Salthe’s hierarchical structuralism, which considers three levels at a time in a semiotic system, we present some tentative answers to those questions.",
"title": ""
},
{
"docid": "36f2be7a14eeb10ad975aa00cfd30f36",
"text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.",
"title": ""
},
{
"docid": "4805f0548cb458b7fad623c07ab7176d",
"text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is global asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.",
"title": ""
},
{
"docid": "980950d8c5c7f5cda550b271d4e0d309",
"text": "The paper presents an accurate analytical subdomain model for computation of the open-circuit magnetic field in surface-mounted permanent-magnet machines with any pole and slot combinations, including fractional slot machines, accounting for stator slotting effect. It is derived by solving the field governing equations in each simple and regular subdomain, i.e., magnet, air-gap and stator slots, and applying the boundary conditions to the interfaces between these subdomains. The model accurately accounts for the influence of interaction between slots, radial/parallel magnetization, internal/external rotor topologies, relative recoil permeability of magnets, and odd/even periodic boundary conditions. The back-electromotive force, electromagnetic torque, cogging torque, and unbalanced magnetic force are obtained based on the field model. The relationship between this accurate subdomain model and the conventional subdomain model, which is based on the simplified one slot per pole machine model, is also discussed. The investigation shows that the proposed accurate subdomain model has better accuracy than the subdomain model based on one slot/pole machine model. The finite element and experimental results validate the analytical prediction.",
"title": ""
},
{
"docid": "90fe763855ca6c4fabe4f9d042d5c61a",
"text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.",
"title": ""
},
{
"docid": "91f3268092606d2bd1698096e32c824f",
"text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and use this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Taskoriented Dialogue Dataset shows that our framework significantly outperforms other sequenceto-sequence based baseline models on both automatic and human evaluation. Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。",
"title": ""
},
{
"docid": "3ce6c3b6a23e713bf9af419ce2d7ded3",
"text": "Two measures of financial performance that are being applied increasingly in investor-owned and not-for-profit healthcare organizations are market value added (MVA) and economic value added (EVA). Unlike traditional profitability measures, both MVA and EVA measures take into account the cost of equity capital. MVA is most appropriate for investor-owned healthcare organizations and EVA is the best measure for not-for-profit organizations. As healthcare financial managers become more familiar with MVA and EVA and understand their potential, these two measures may become more widely accepted accounting tools for assessing the financial performance of investor-owned and not-for-profit healthcare organizations.",
"title": ""
},
{
"docid": "17d0da8dd05d5cfb79a5f4de4449fcdd",
"text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in",
"title": ""
},
{
"docid": "e7d36dc01a3e20c3fb6d2b5245e46705",
"text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.",
"title": ""
},
{
"docid": "b5fe13becf36cdc699a083b732dc5d6a",
"text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.",
"title": ""
},
{
"docid": "f2af56bef7ae8c12910d125a3b729e6a",
"text": "We investigate an important and challenging problem in summary generation, i.e., Evolutionary Trans-Temporal Summarization (ETTS), which generates news timelines from massive data on the Internet. ETTS greatly facilitates fast news browsing and knowledge comprehension, and hence is a necessity. Given the collection of time-stamped web documents related to the evolving news, ETTS aims to return news evolution along the timeline, consisting of individual but correlated summaries on each date. Existing summarization algorithms fail to utilize trans-temporal characteristics among these component summaries. We propose to model trans-temporal correlations among component summaries for timelines, using inter-date and intra-date sentence dependencies, and present a novel combination. We develop experimental systems to compare 5 rival algorithms on 6 instinctively different datasets which amount to 10251 documents. Evaluation results in ROUGE metrics indicate the effectiveness of the proposed approach based on trans-temporal information.",
"title": ""
},
{
"docid": "cf15139b8f62d01f38f14d8fa09d3bd6",
"text": "In reinforcement learning (RL) tasks, an efficient exploration mechanism should be able to encourage an agent to take actions that lead to less frequent states which may yield higher accumulative future return. However, both knowing about the future and evaluating the frequentness of states are non-trivial tasks, especially for deep RL domains, where a state is represented by high-dimensional image frames. In this paper, we propose a novel informed exploration framework for deep RL tasks, where we build the capability for a RL agent to predict over the future transitions and evaluate the frequentness for the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to generate future frames given a state-action pair, and a convolutional autoencoder model to generate deep features for conducting hashing over the seen frames. In addition, to utilize the counts derived from the seen frames to evaluate the frequentness for the predicted frames, we tackle the challenge of making the hash codes for the predicted future frames to match with their corresponding seen frames. In this way, we could derive a reliable metric for evaluating the novelty of the future direction pointed by each action, and hence inform the agent to explore the least frequent one. We use Atari 2600 games as the testing environment and demonstrate that the proposed framework achieves significant performance gain over a state-of-the-art informed exploration approach in most of the domains.",
"title": ""
}
] |
scidocsrr
|
ac848e7f7162ffd6b2a022426906e321
|
Visual Learning of Arithmetic Operations
|
[
{
"docid": "2f20bca0134eb1bd9d65c4791f94ddcc",
"text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"title": ""
}
] |
[
{
"docid": "90eb392765c01b6166daa2a7a62944d1",
"text": "Recent studies have demonstrated the potential for reducing energy consumption in integrated circuits by allowing errors during computation. While most proposed techniques for achieving this rely on voltage overscaling (VOS), this paper shows that Imprecise Hardware (IHW) with design-time structural parameters can achieve orthogonal energy-quality tradeoffs. Two IHW adders are improved and two IHW multipliers are introduced in this paper. In addition, a simulation-free error estimation technique is proposed to rapidly and accurately estimate the impact of IHW on output quality. Finally, a quality-aware energy minimization methodology is presented. To validate this methodology, experiments are conducted on two computational kernels: DOT-PRODUCT and L2-NORM -- used in three applications -- Leukocyte Tracker, SVM classification and K-means clustering. Results show that the Hellinger distance between estimated and simulated error distribution is within 0.05 and that the methodology enables designers to explore energy-quality tradeoffs with significant reduction in simulation complexity.",
"title": ""
},
{
"docid": "6b94f2f88fb62de5bec8ae0ace3afa1c",
"text": "The purpose of this paper is to design a microstrip patch antenna with low pass filter for efficient rectenna design this structure having the property of rejecting higher harmonics than 2GHz. As the design frequency is 2GHz.in first step we design a patch antenna in second step we design patch antenna with low pass filter and combine these two. The IE3D software is used for the simulation of this structure.",
"title": ""
},
{
"docid": "2f83b2ef8f71c56069304b0962074edc",
"text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.",
"title": ""
},
{
"docid": "8c4ece41e96c08536375e9e72dc9ddc3",
"text": "BACKGROUND\nWe present one unusual case of anophthalmia and craniofacial cleft, probably due to congenital toxoplasmosis only.\n\n\nCASE PRESENTATION\nA two-month-old male had a twin in utero who disappeared between the 7th and the 14th week of gestation. At birth, the baby presented anophthalmia and craniofacial cleft, and no sign compatible with genetic or exposition/deficiency problems, like the Wolf-Hirschhorn syndrome or maternal vitamin A deficiency. Congenital toxoplasmosis was confirmed by the presence of IgM abs and IgG neo-antibodies in western blot, as well as by real time PCR in blood. CMV infection was also discarded by PCR and IgM negative results. Structures suggestive of T. gondii pseudocysts were observed in a biopsy taken during the first functional/esthetic surgery.\n\n\nCONCLUSIONS\nWe conclude that this is a rare case of anophthalmia combined with craniofacial cleft due to congenital toxoplasmosis, that must be considered by physicians. This has not been reported before.",
"title": ""
},
{
"docid": "c3c5931200ff752d8285cc1068e779ee",
"text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"title": ""
},
{
"docid": "397c25e6381818eabadf23d214409e45",
"text": "s of Invited Talks Plagiarizing Nature for Engineering Analysis and Design",
"title": ""
},
{
"docid": "4f44b685adc7e63f18a40d0f3fc25585",
"text": "Computational Thinking (CT) has become popular in recent years and has been recognised as an essential skill for all, as members of the digital age. Many researchers have tried to define CT and have conducted studies about this topic. However, CT literature is at an early stage of maturity, and is far from either explaining what CT is, or how to teach and assess this skill. In the light of this state of affairs, the purpose of this study is to examine the purpose, target population, theoretical basis, definition, scope, type and employed research design of selected papers in the literature that have focused on computational thinking, and to provide a framework about the notion, scope and elements of CT. In order to reveal the literature and create the framework for computational thinking, an inductive qualitative content analysis was conducted on 125 papers about CT, selected according to pre-defined criteria from six different databases and digital libraries. According to the results, the main topics covered in the papers composed of activities (computerised or unplugged) that promote CT in the curriculum. The targeted population of the papers was mainly K-12. Gamed-based learning and constructivism were the main theories covered as the basis for CT papers. Most of the papers were written for academic conferences and mainly composed of personal views about CT. The study also identified the most commonly used words in the definitions and scope of CT, which in turn formed the framework of CT. The findings obtained in this study may not only be useful in the exploration of research topics in CT and the identification of CT in the literature, but also support those who need guidance for developing tasks or programs about computational thinking and informatics.",
"title": ""
},
{
"docid": "8a9603a10e5e02f6edfbd965ee11bbb9",
"text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.",
"title": ""
},
{
"docid": "28b15544f3e054ca483382a471c513e5",
"text": "In this work, design and control system development of a gas-electric hybrid quad tilt-rotor UAV with morphing wing are presented. The proposed aircraft has an all carbon-composite body, gas-electric hybrid electric generation system for 3 hours hovering or up to 10 hours of horizontal flight, a novel configuration for VTOL and airplane-like flights with minimized aerodynamic costs and mechanical morphing wings for both low speed and high speed horizontal flights. The mechanical design of the vehicle is performed to achieve a strong and light-weight structure, whereas the aerodynamic and propulsion system designs are aimed for accomplishing both fixed wing and rotary wing aircraft flights with maximized flight endurance. A detailed dynamic model of the aerial vehicle is developed including the effects of tilting rotors, variable fuel weight, and morphing wing lift-drag forces and pitching moments. Control system is designed for both flight regimes and flight simulations are carried out to check the performance of the proposed control system.",
"title": ""
},
{
"docid": "5571389dcc25cbcd9c68517934adce1d",
"text": "The polysaccharide-containing extracellular fractions (EFs) of the edible mushroom Pleurotus ostreatus have immunomodulating effects. Being aware of these therapeutic effects of mushroom extracts, we have investigated the synergistic relations between these extracts and BIAVAC and BIAROMVAC vaccines. These vaccines target the stimulation of the immune system in commercial poultry, which are extremely vulnerable in the first days of their lives. By administrating EF with polysaccharides from P. ostreatus to unvaccinated broilers we have noticed slow stimulation of maternal antibodies against infectious bursal disease (IBD) starting from four weeks post hatching. For the broilers vaccinated with BIAVAC and BIAROMVAC vaccines a low to almost complete lack of IBD maternal antibodies has been recorded. By adding 5% and 15% EF in the water intake, as compared to the reaction of the immune system in the previous experiment, the level of IBD antibodies was increased. This has led us to believe that by using this combination of BIAVAC and BIAROMVAC vaccine and EF from P. ostreatus we can obtain good results in stimulating the production of IBD antibodies in the period of the chicken first days of life, which are critical to broilers' survival. This can be rationalized by the newly proposed reactivity biological activity (ReBiAc) principles by examining the parabolic relationship between EF administration and recorded biological activity.",
"title": ""
},
{
"docid": "59ce42be854ceb6a92579b43442f016c",
"text": "This paper presents the design, fabrication, and characterization of the SiC JBSFET (junction barrier Schottky (JBS) diode integrated MOSFET). The fabrication of the JBSFET adopted a novel single metal, single thermal treatment process to simultaneously form ohmic contacts on n+, p+ implanted regions, and Schottky contact on the n-4H-SiC epilayer. The presented SiC JBSFET uses 40% smaller wafer area because the diode and MOSFET share the edge termination as well as the current conducting drift region. The proposed single chip solution of MOSFET/JBS diode functionalities eliminates the parasitic inductance between separately packaged devices allowing a higher frequency operation in a power converter.",
"title": ""
},
{
"docid": "7c0586335facd8388814f863e19e3d06",
"text": "OBJECTIVE\nWe reviewed randomized controlled trials of complementary and alternative medicine (CAM) treatments for depression, anxiety, and sleep disturbance in nondemented older adults.\n\n\nDATA SOURCES\nWe searched PubMed (1966-September 2006) and PsycINFO (1984-September 2006) databases using combinations of terms including depression, anxiety, and sleep; older adult/elderly; randomized controlled trial; and a list of 56 terms related to CAM.\n\n\nSTUDY SELECTION\nOf the 855 studies identified by database searches, 29 met our inclusion criteria: sample size >or= 30, treatment duration >or= 2 weeks, and publication in English. Four additional articles from manual bibliography searches met inclusion criteria, totaling 33 studies.\n\n\nDATA EXTRACTION\nWe reviewed identified articles for methodological quality using a modified Scale for Assessing Scientific Quality of Investigations (SASQI). We categorized a study as positive if the CAM therapy proved significantly more effective than an inactive control (or as effective as active control) on at least 1 primary psychological outcome. Positive and negative studies were compared on the following characteristics: CAM treatment category, symptom(s) assessed, country where the study was conducted, sample size, treatment duration, and mean sample age.\n\n\nDATA SYNTHESIS\n67% of the 33 studies reviewed were positive. Positive studies had lower SASQI scores for methodology than negative studies. Mind-body and body-based therapies had somewhat higher rates of positive results than energy- or biologically-based therapies.\n\n\nCONCLUSIONS\nMost studies had substantial methodological limitations. A few well-conducted studies suggested therapeutic potential for certain CAM interventions in older adults (e.g., mind-body interventions for sleep disturbances and acupressure for sleep and anxiety). More rigorous research is needed, and suggestions for future research are summarized.",
"title": ""
},
{
"docid": "87ab746df486a15b895cc0a4706db6c7",
"text": "Many complex systems in the real world can be modeled as signed social networks that contain both positive and negative relations. Algorithms for mining social networks have been developed in the past; however, most of them were designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. In this work, we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior knowledge on hidden community structures. The effectiveness and efficacy of FEC have been demonstrated through a set of rigorous experiments involving both benchmark and randomly generated signed networks.",
"title": ""
},
{
"docid": "bbfc488e55fe2dfaff2af73a75c31edd",
"text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.",
"title": ""
},
{
"docid": "c691820eec90395366a415f19b2e8764",
"text": "This study attempts to identify the salient factors affecting tourist food consumption. By reviewing available studies in the hospitality and tourism literature and synthesising insights from food consumption and sociological research, five socio-cultural and psychological factors influencing tourist food consumption are identified: cultural/religious influences, socio-demographic factors, food-related personality traits, exposure effect/past experience, and motivational factors. The findings further suggest that the motivational factors can be categorised into five main dimensions: symbolic, obligatory, contrast, extension, and pleasure. Given the lack of research in examining tourist food consumption systematically, the multidisciplinary approach adopted in this study allows a comprehensive understanding of the phenomenon which forms the basis for further research and conceptual elaboration.",
"title": ""
},
{
"docid": "e79abaaa50d8ab8938f1839c7e4067f9",
"text": "We review the objectives and techniques used in the control of horizontal axis wind turbines at the individual turbine level, where controls are applied to the turbine blade pitch and generator. The turbine system is modeled as a flexible structure operating in the presence of turbulent wind disturbances. Some overview of the various stages of turbine operation and control strategies used to maximize energy capture in below rated wind speeds is given, but emphasis is on control to alleviate loads when the turbine is operating at maximum power. After reviewing basic turbine control objectives, we provide an overview of the common basic linear control approaches and then describe more advanced control architectures and why they may provide significant advantages.",
"title": ""
},
{
"docid": "e7b9c3ef571770788cd557f8c4843bcf",
"text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.",
"title": ""
},
{
"docid": "e605e0417160dec6badddd14ec093843",
"text": "Within both academic and policy discourses, the concept of media literacy is being extended from its traditional focus on print and audiovisual media to encompass the internet and other new media. The present article addresses three central questions currently facing the public, policy-makers and academy: What is media literacy? How is it changing? And what are the uses of literacy? The article begins with a definition: media literacy is the ability to access, analyse, evaluate and create messages across a variety of contexts. This four-component model is then examined for its applicability to the internet. Having advocated this skills-based approach to media literacy in relation to the internet, the article identifies some outstanding issues for new media literacy crucial to any policy of promoting media literacy among the population. The outcome is to extend our understanding of media literacy so as to encompass the historically and culturally conditioned relationship among three processes: (i) the symbolic and material representation of knowledge, culture and values; (ii) the diffusion of interpretative skills and abilities across a (stratified) population; and (iii) the institutional, especially, the state management of the power that access to and skilled use of knowledge brings to those who are ‘literate’.",
"title": ""
},
{
"docid": "e1dd2a719d3389a11323c5245cd2b938",
"text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.",
"title": ""
},
{
"docid": "a68cec6fd069499099c8bca264eb0982",
"text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.",
"title": ""
}
] |
scidocsrr
|
779584c4a51b39d0abab8d2966c96bf7
|
AutoManner: An Automated Interface for Making Public Speakers Aware of Their Mannerisms
|
[
{
"docid": "db252efe7bde6cc0d58e337f8ad04271",
"text": "Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction, and acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named \"automated social skills trainer,\" which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features have a relationship with autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skill by using the system for 50 minutes.",
"title": ""
},
{
"docid": "e6297a36c70d15a3fe6c6842b2afbd8a",
"text": "Good public speaking skills convey strong and effective communication, which is critical in many professions and used in everyday life. The ability to speak publicly requires a lot of training and practice. Recent technological developments enable new approaches for public speaking training that allow users to practice in a safe and engaging environment. We explore feedback strategies for public speaking training that are based on an interactive virtual audience paradigm. We investigate three study conditions: (1) a non-interactive virtual audience (control condition), (2) direct visual feedback, and (3) nonverbal feedback from an interactive virtual audience. We perform a threefold evaluation based on self-assessment questionnaires, expert assessments, and two objectively annotated measures of eye-contact and avoidance of pause fillers. Our experiments show that the interactive virtual audience brings together the best of both worlds: increased engagement and challenge as well as improved public speaking skills as judged by experts.",
"title": ""
}
] |
[
{
"docid": "83b7d0a8aba61cd49527d67a47231fea",
"text": "The purpose of the present study was to estimate the prevalence of depression in Chinese university students, and to identify the socio-demographic factors associated with depression in this population. A multi-stage stratified sampling procedure was used to select university students (N = 5245) in Harbin (Heilongjiang Province, Northeastern China), who were aged 16-35 years. The Beck Depression Inventory (BDI) was used to determine depressive symptoms of the participants. BDI scores of 14 or higher were categorized as depressive for logistic regression analysis. Depression was diagnosed by the Structured Clinical Interview (SCID) for the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV). 11.7% of the participants had a BDI score 14 or higher. Major Depressive Disorder was seen in 4.0% of Chinese university students. There were no statistical differences in the incidence of depression when gender, ethnicity, and university classification were analyzed. Multivariate analysis showed that age, study year, satisfaction with major, family income situation, parental relationship and mother's education were significantly associated with depression. Moderate depression is prevalent in Chinese university students. The students who were older, dissatisfied with their major, had a lower family income, poor parental relationships, and a lower level of mother's education were susceptible to depression.",
"title": ""
},
{
"docid": "fc3af1e7ebc13605938d8f8238d9c8bd",
"text": "Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"title": ""
},
{
"docid": "038f4500da6feed4c56f7def631539a3",
"text": "Deep learning methods are useful for high-dimensional data and are becoming widely used in many areas of software engineering. Deep learners utilizes extensive computational power and can take a long time to train- making it difficult to widely validate and repeat and improve their results. Further, they are not the best solution in all domains. For example, recent results show that for finding related Stack Overflow posts, a tuned SVM performs similarly to a deep learner, but is significantly faster to train.\n This paper extends that recent result by clustering the dataset, then tuning every learners within each cluster. This approach is over 500 times faster than deep learning (and over 900 times faster if we use all the cores on a standard laptop computer). Significantly, this faster approach generates classifiers nearly as good (within 2% F1 Score) as the much slower deep learning method. Hence we recommend this faster methods since it is much easier to reproduce and utilizes far fewer CPU resources.\n More generally, we recommend that before researchers release research results, that they compare their supposedly sophisticated methods against simpler alternatives (e.g applying simpler learners to build local models).",
"title": ""
},
{
"docid": "759b85bd270afb908ce2b4f23e0f5269",
"text": "In this paper we discuss λ-policy iteration, a method for exact and approximate dynamic programming. It is intermediate between the classical value iteration (VI) and policy iteration (PI) methods, and it is closely related to optimistic (also known as modified) PI, whereby each policy evaluation is done approximately, using a finite number of VI. We review the theory of the method and associated questions of bias and exploration arising in simulation-based cost function approximation. We then discuss various implementations, which offer advantages over well-established PI methods that use LSPE(λ), LSTD(λ), or TD(λ) for policy evaluation with cost function approximation. One of these implementations is based on a new simulation scheme, called geometric sampling, which uses multiple short trajectories rather than a single infinitely long trajectory.",
"title": ""
},
{
"docid": "b1239b48759757db5dfbc0c8bca6e983",
"text": "Deep convolutional neural networks (DCNNs) have been driving significant advances in semantic image segmentation due to their powerful feature representation for recognition. However, their performance in preserving object boundaries is still not satisfactory. Visual mechanism theory indicates that image segmentation tasks require not only recognition, like DCNNs, but also local visual attention capability. Considering that superpixel is good at grasping detailed local structure, we propose a probabilistic superpixel-based dense conditional random field model (PSP-CRF) to refine label assignments as a post-processing optimization method. First, the well-known fully convolutional networks (FCN) and Deeplab-ResNet are employed to produce coarse prediction probabilistic maps at each pixel. Second, we construct a fully connected CRF model based on the PSP generated by the simple linear iterative clustering algorithm. In our approach, an effective refining algorithm with entropy is developed to convert the pixel-level appearance and position features to the normalized PSP, which works well for CRF. Third, our method optimizes the PSP-CRF to obtain the final label assignment results by employing a highly efficient mean field inference algorithm and some quadratic programming relaxation related algorithms. The experiments on the PASCAL VOC segmentation dataset demonstrate the effectiveness of our methods which can improve the segmentation performance of DCNNs to 82% in mIoU while increasing the computational efficiency by 47%.",
"title": ""
},
{
"docid": "d18c77b3d741e1a7ed10588f6a3e75c0",
"text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.",
"title": ""
},
{
"docid": "63e715ae4f67f4c7261258531516deb3",
"text": "Query similarity calculation is an important problem and has a wide range of applications in IR, including query recommendation, query expansion, and even advertisement matching. Existing work on query similarity aims to provide a single similarity measure without considering the fact that queries are ambiguous and usually have multiple search intents. In this paper, we argue that query similarity should be defined upon search intents, so-called intent-aware query similarity. By introducing search intents into the calculation of query similarity, we can obtain more accurate and also informative similarity measures on queries and thus help a variety of applications, especially those related to diversification. Specifically, we first identify the potential search intents of queries, and then measure query similarity under different intents using intent-aware representations. A regularized topic model is employed to automatically learn the potential intents of queries by using both the words from search result snippets and the regularization from query co-clicks. Experimental results confirm the effectiveness of intent-aware query similarity on ambiguous queries which can provide significantly better similarity scores over the traditional approaches. We also experimentally verified the utility of intent-aware similarity in the application of query recommendation, which can suggest diverse queries in a structured way to search users.",
"title": ""
},
{
"docid": "8abcf3e56e272c06da26a40d66afcfb0",
"text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.",
"title": ""
},
{
"docid": "6bbb75137cee4cd173e2f7d082da6a2c",
"text": "Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or the noise brought by other tasks. In this paper, we propose an adversarial multi-task learning framework, alleviating the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrates the benefits of our approach. Besides, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at http://nlp.fudan.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "ddc3241c09a33bde1346623cf74e6866",
"text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.",
"title": ""
},
{
"docid": "a42ca90e38f8fcdea60df967c7ca8ecd",
"text": "DDoS defense today relies on expensive and proprietary hardware appliances deployed at fixed locations. This introduces key limitations with respect to flexibility (e.g., complex routing to get traffic to these “chokepoints”) and elasticity in handling changing attack patterns. We observe an opportunity to address these limitations using new networking paradigms such as softwaredefined networking (SDN) and network functions virtualization (NFV). Based on this observation, we design and implement Bohatei, a flexible and elastic DDoS defense system. In designing Bohatei, we address key challenges with respect to scalability, responsiveness, and adversary-resilience. We have implemented defenses for several DDoS attacks using Bohatei. Our evaluations show that Bohatei is scalable (handling 500 Gbps attacks), responsive (mitigating attacks within one minute), and resilient to dynamic adversaries.",
"title": ""
},
{
"docid": "1d1e89d6f1db290f01d296394d03a71b",
"text": "Ontology mapping is seen as a solution provider in today’s landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.",
"title": ""
},
{
"docid": "e141a1c5c221aa97db98534b339694cb",
"text": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations and thus the ‘‘fit’’ between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use the structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "485611e88206276342f9ce544dc19ca7",
"text": "In a prospective study by the Scoliosis Research Society, 286 girls who had adolescent idiopathic scoliosis, a thoracic or thoracolumbar curve of 25 to 35 degrees, and a mean age of twelve years and seven months (range, ten to fifteen years) were followed to determine the effect of treatment with observation only (129 patients), an underarm plastic brace (111 patients), and nighttime surface electrical stimulation (forty-six patients). Thirty-nine patients were lost to follow-up, leaving 247 (86 per cent) who were followed until maturity or who were dropped from the study because of failure of the assigned treatment. The end point of failure of treatment was defined as an increase in the curve of at least 6 degrees, from the time of the first roentgenogram, on two consecutive roentgenograms. As determined with use of this end point, treatment with a brace failed in seventeen of the 111 patients; observation only, in fifty-eight of the 129 patients; and electrical stimulation, in twenty-two of the forty-six patients. According to survivorship analysis, treatment with a brace was associated with a success rate of 74 per cent (95 per cent confidence interval, 52 to 84) at four years; observation only, with a success rate of 34 per cent (95 per cent confidence interval, 16 to 49); and electrical stimulation, with a success rate of 33 per cent (95 per cent confidence interval, 12 to 60).(ABSTRACT TRUNCATED AT 250 WORDS)",
"title": ""
},
{
"docid": "2984294e4fd66a8eceab0ca8dd76361f",
"text": "The popularization of Bitcoin, a decentralized crypto-currency has inspired the production of several alternative, or “alt”, currencies. Ethereum, CryptoNote, and Zerocash all represent unique contributions to the cryptocurrency space. Although most alt currencies harbor their own source of innovation, they have no means of adopting the innovations of other currencies which may succeed them. We aim to remedy the potential for atrophied evolution in the crypto-currency space by presenting Tezos, a generic and self-amending crypto-ledger. Tezos can instantiate any blockchain based protocol. Its seed protocol specifies a procedure for stakeholders to approve amendments to the protocol, including amendments to the amendment procedure itself. Upgrades to Tezos are staged through a testing environment to allow stakeholders to recall potentially problematic amendments. The philosophy of Tezos is inspired by Peter Suber’s Nomic[1], a game built around a fully introspective set of rules. In this paper, we hope to elucidate the potential benefits of Tezos, our choice to implement as a proof-of-stake system, and our choice to write it",
"title": ""
},
{
"docid": "c5628c76f448fb71165069aefc75a2c4",
"text": "This research work aims to design and develop a wireless food ordering system in the restaurant. The project presents in-depth on the technical operation of the Wireless Ordering System (WOS) including systems architecture, function, limitations and recommendations. It is believed that with the increasing use of handheld device e.g PDAs in restaurants, pervasive application will become an important tool for restaurants to improve the management aspect by utilizing PDAs to coordinate food ordering could increase efficiency for restaurants and caterers by saving time, reducing human errors and by providing higher quality customer service. With the combination of simple design and readily available emerging communications technologies, it can be concluded that this system is an attractive solution for the hospitality industry.",
"title": ""
},
{
"docid": "be69820b8b0f80c9bb9c56d4652645da",
"text": "Intel Software Guard Extensions (SGX) is an emerging trusted hardware technology. SGX enables user-level code to allocate regions of trusted memory, called enclaves, where the confidentiality and integrity of code and data are guaranteed. While SGX offers strong security for applications, one limitation of SGX is the lack of system call support inside enclaves, which leads to a non-trivial, refactoring effort when protecting existing applications with SGX. To address this issue, previous works have ported existing library OSes to SGX. However, these library OSes are suboptimal in terms of security and performance since they are designed without taking into account the characteristics of SGX.\n In this paper, we revisit the library OS approach in a new setting---Intel SGX. We first quantitatively evaluate the performance impact of enclave transitions on SGX programs, identifying it as a performance bottleneck for any library OSes that aim to support system-intensive SGX applications. We then present the design and implementation of SGXKernel, an in-enclave library OS, with highlight on its switchless design, which obviates the needs for enclave transitions. This switchless design is achieved by incorporating two novel ideas: asynchronous cross-enclave communication and preemptible in-enclave multi-threading. We intensively evaluate the performance of SGXKernel on microbenchmarks and application benchmarks. The results show that SGXKernel significantly outperforms a state-of-the-art library OS that has been ported to SGX.",
"title": ""
},
{
"docid": "8fbec2539107e58a6cd4e6266dc20ccc",
"text": "The Indoor flights of UAV (Unmanned Aerial Vehicle) are susceptible to impacts of multiples obstacles and walls. The most basic controller that a drone requires in order to achieve indoor flight, is a controller that can maintain the drone flying in the same site, this is called hovering control. This paper presents a fuzzy PID controller for hovering. The control system to modify the gains of the parameters of the PID controllers in the x and y axes as a function of position and error in each axis, of a known environment. Flight tests were performed over an AR.Drone 2.0, comparing RMSE errors of hovering with classical PID and fuzzy PID under disturbances. The fuzzy PID controller reduced the average error from 11 cm to 8 cm in a 3 minutes test. This result is an improvement over previously published works.",
"title": ""
},
{
"docid": "d9c9dde3f5e3bf280f09d6783a573357",
"text": "We present a detection method that is able to detect a learned target and is valid for both static and moving cameras. As an application, we detect pedestrians, but could be anything if there is a large set of images of it. The data set is fed into a number of deep convolutional networks, and then, two of these models are set in cascade in order to filter the cutouts of a multi-resolution window that scans the frames in a video sequence. We demonstrate that the excellent performance of deep convolutional networks is very difficult to match when dealing with real problems, and yet we obtain competitive results.",
"title": ""
}
] |
scidocsrr
|
6880869271712d6e15c78f941450f136
|
Noninvasive Brain-Computer Interface: Decoding Arm Movement Kinematics and Motor Control
|
[
{
"docid": "6865c344849ec96d79e7a83a2ab559b1",
"text": "A brain-computer interface (BCI) acquires brain signals, extracts informative features, and translates these features to commands to control an external device. This paper investigates the application of a noninvasive electroencephalography (EEG)based BCI to identify brain signal features in regard to actual hand movement speed. This provides a more refined control for a BCI system in terms of movement parameters. An experiment was performed to collect EEG data from subjects while they performed right-hand movement at two different speeds, namely fast and slow, in four different directions. The informative features from the data were obtained using the Wavelet-Common Spatial Pattern (W-CSP) algorithm that provided high-temporal-spatial-spectral resolution. The applicability of these features to classify the two speeds and to reconstruct the speed profile was studied. The results for classifying speed across seven subjects yielded a mean accuracy of 83.71% using a Fisher Linear Discriminant (FLD) classifier. The speed components were reconstructed using multiple linear regression and significant correlation of 0.52 (Pearson's linear correlation coefficient) was obtained between recorded and reconstructed velocities on an average. The spatial patterns of the W-CSP features obtained showed activations in parietal and motor areas of the brain. The results achieved promises to provide a more refined control in BCI by including control of movement speed.",
"title": ""
}
] |
[
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
},
{
"docid": "5b16933905d36ba54ab74743251d7ca7",
"text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.",
"title": ""
},
{
"docid": "68c02f7658cb55a00f3a71923cf6dd2e",
"text": "Anterior insula and adjacent frontal operculum (hereafter referred to as IFO) are active during exposure to tastants/odorants (particularly disgusting ones), and during the viewing of disgusted facial expressions. Together with lesion data, the IFO has thus been proposed to be crucial in processing disgust-related stimuli. Here, we examined IFO involvement in the processing of other people's gustatory emotions more generally by exposing participants to food-related disgusted, pleased and neutral facial expressions during functional magnetic resonance imaging (fMRI). We then exposed participants to pleasant, unpleasant and neutral tastants for the purpose of mapping their gustatory IFO. Finally, we associated participants' self reported empathy (measured using the Interpersonal Reactivity Index, IRI) with their IFO activation during the witnessing of others' gustatory emotions. We show that participants' empathy scores were predictive of their gustatory IFO activation while witnessing both the pleased and disgusted facial expression of others. While the IFO has been implicated in the processing of negative emotions of others and empathy for negative experiences like pain, our finding extends this concept to empathy for intense positive feelings, and provides empirical support for the view that the IFO contributes to empathy by mapping the bodily feelings of others onto the internal bodily states of the observer, in agreement with the putative interoceptive function of the IFO.",
"title": ""
},
{
"docid": "0583b36c9dfa3080ab94b16a7410b7cd",
"text": "In this paper we present a simple yet effective approach to automatic OCR error detection and correction on a corpus of French clinical reports of variable OCR quality within the domain of foetopathology. While traditional OCR error detection and correction systems rely heavily on external information such as domain-specific lexicons, OCR process information or manually corrected training material, these are not always available given the constraints placed on using medical corpora. We therefore propose a novel method that only needs a representative corpus of acceptable OCR quality in order to train models. Our method uses recurrent neural networks (RNNs) to model sequential information on character level for a given medical text corpus. By inserting noise during the training process we can simultaneously learn the underlying (character-level) language model and as well as learning to detect and eliminate random noise from the textual input. The resulting models are robust to the variability of OCR quality but do not require additional, external information such as lexicons. We compare two different ways of injecting noise into the training process and evaluate our models on a manually corrected data set. We find that the best performing system achieves a 73% accuracy.",
"title": ""
},
{
"docid": "1050845816f29b50360eb6f2277071be",
"text": "Natural language interactive narratives are a variant of traditional branching storylines where player actions are expressed in natural language rather than by selecting among choices. Previous efforts have handled the richness of natural language input using machine learning technologies for text classification, bootstrapping supervised machine learning approaches with human-in-the-loop data acquisition or by using expected player input as fake training data. This paper explores a third alternative, where unsupervised text classifiers are used to automatically route player input to the most appropriate storyline branch. We describe the Data-driven Interactive Narrative Engine (DINE), a web-based tool for authoring and deploying natural language interactive narratives. To compare the performance of different algorithms for unsupervised text classification, we collected thousands of user inputs from hundreds of crowdsourced participants playing 25 different scenarios, and hand-annotated them to create a goldstandard test set. Through comparative evaluations, we identified an unsupervised algorithm for narrative text classification that approaches the performance of supervised text classification algorithms. We discuss how this technology supports authors in the rapid creation and deployment of interactive narrative experiences, with authorial burdens similar to that of traditional branching storylines.",
"title": ""
},
{
"docid": "2e4d1b5b1c1a8dbeba0d17025f2a2471",
"text": "In this age of globalization, the need for competent legal translators is greater than ever. This perhaps explains the growing interest in legal translation not only by linguists but also by lawyers, the latter especially over the past 10 years (cf. Berteloot, 1999:101). Although Berteloot maintains that lawyers analyze the subject matter from a different perspective, she advises her colleagues also to take account of contributions by linguists (ibid.). I assume this includes translation theory as well. In the past, both linguists and lawyers have attempted to apply theories of general translation to legal texts, such as Catford’s concept of situation equivalence (Kielar, 1977:33), Nida’s theory of formal correspondence (Weisflog, 1987:187, 191); also in Weisflog 1996:35), and, more recently, Vermeer’s skopos theory (see Madsen’s, 1997:17-26). While some legal translators seem content to apply principles of general translation theory (Koutsivitis, 1988:37), others dispute the usefulness of translation theory for legal translation (Weston, 1991:1). The latter view is not surprising since special methods and techniques are required in legal translation, a fact confirmed by Bocquet, who recognizes the importance of establishing a theory or at least a theoretical framework that is practice oriented (1994). By analyzing legal translation as an act of communication in the mechanism of the law, my book New Approach to Legal Translation (1997) attempts to provide a theoretical basis for legal translation within the framework of modern translation theory.",
"title": ""
},
{
"docid": "1fcbc7d6c408d00d3bd1e225e28a32cc",
"text": "Active learning aims to train an accurate prediction model with minimum cost by labeling most informative instances. In this paper, we survey existing works on active learning from an instance-selection perspective and classify them into two categories with a progressive relationship: (1) active learning merely based on uncertainty of independent and identically distributed (IID) instances, and (2) active learning by further taking into account instance correlations. Using the above categorization, we summarize major approaches in the field, along with their technical strengths/weaknesses, followed by a simple runtime performance comparison, and discussion about emerging active learning applications and instance-selection challenges therein. This survey intends to provide a high-level summarization for active learning and motivates interested readers to consider instance-selection approaches for designing effective active learning solutions.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
},
{
"docid": "81fa6a7931b8d5f15d55316a6ed1d854",
"text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.",
"title": ""
},
{
"docid": "d66799a5d65a6f23527a33b124812ea6",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.",
"title": ""
},
{
"docid": "b2c03d8e54a2a6840f6688ab9682e24b",
"text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and followthe- leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <;0.4mm and mean tip errors <;1.6mm can theoretically be achieved for constant curvature paths and <;2mm and <;3.1mm for general B-spline curves respectively.",
"title": ""
},
{
"docid": "52606d9059e08bda1bd837c8e5b8296b",
"text": "The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection LBSNs and mobile devices. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them to POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model (Geo-PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows to capture the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. Moreover, based our Geo-PFM framework, we further develop a Poisson Geo-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity), show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin.",
"title": ""
},
{
"docid": "10c926cbfe4339a3a5279e238bc1b0a7",
"text": "Health outcomes in modern society are often shaped by peer interactions. Increasingly, a significant fraction of such interactions happen online and can have an impact on various mental health and behavioral health outcomes. Guided by appropriate social and psychological research, we conduct an observational study to understand the interactions between clinically depressed users and their ego-network when contrasted with a differential control group of normal users and their ego-network. Specifically, we examine if one can identify relevant linguistic and emotional signals from social media exchanges to detect symptomatic cues of depression. We observe significant deviations in the behavior of depressed users from the control group. Reduced and nocturnal online activity patterns, reduced active and passive network participation, increase in negative sentiment or emotion, distinct linguistic styles (e.g. self-focused pronoun usage), highly clustered and tightly-knit neighborhood structure, and little to no exchange of influence between depressed users and their ego-network over time are some of the observed characteristics. Based on our observations, we then describe an approach to extract relevant features and show that building a classifier to predict depression based on such features can achieve an F-score of 90%.",
"title": ""
},
{
"docid": "ca62a58ac39d0c2daaa573dcb91cd2e0",
"text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.",
"title": ""
},
{
"docid": "4445f128f31d6f42750049002cb86a29",
"text": "Convolutional neural networks are a popular choice for current object detection and classification systems. Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.",
"title": ""
},
{
"docid": "940e7dc630b7dcbe097ade7abb2883a4",
"text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.",
"title": ""
},
{
"docid": "6964d3ac400abd6ace1ed48c36d68d06",
"text": "Sentiment Analysis (SA) is indeed a fascinating area of research which has stolen the attention of researchers as it has many facets and more importantly it promises economic stakes in the corporate and governance sector. SA has been stemmed out of text analytics and established itself as a separate identity and a domain of research. The wide ranging results of SA have proved to influence the way some critical decisions are taken. Hence, it has become relevant in thorough understanding of the different dimensions of the input, output and the processes and approaches of SA.",
"title": ""
},
{
"docid": "2a7b7d9fab496be18f6bf50add2f7b1e",
"text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.",
"title": ""
}
] |
scidocsrr
|
061e0e5da7eff177611f9dcaf144e92a
|
RT3D: Real-Time 3-D Vehicle Detection in LiDAR Point Cloud for Autonomous Driving
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "2e3bd582d0984f687032f03eb51b5fc0",
"text": "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
}
] |
[
{
"docid": "ffd45fa5cd9c2ce6b4dc7c5433864fd4",
"text": "AIM\nTo evaluate validity of the Greek version of a global measure of perceived stress PSS-14 (Perceived Stress Scale - 14 item).\n\n\nMATERIALS AND METHODS\nThe original PSS-14 (theoretical range 0-56) was translated into Greek and then back-translated. One hundred men and women (39 +/- 10 years old, 40 men) participated in the validation process. Firstly, participants completed the Greek PSS-14 and, then they were interviewed by a psychologist specializing in stress management. Cronbach's alpha (a) evaluated internal consistency of the measurement, whereas Kendall's tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale.\n\n\nRESULTS\nMean (SD) PSS-14 score was 25(7.9). Strong internal consistency (Cronbach's alpha = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS-14 (Kendall's tau-b = 0.43, p < 0.01) were observed. Two factors were extracted. Factor one explained 34.7% of variability and was heavily laden by positive items, and factor two that explained 10.6% of the variability by negative items. Confirmatory factor analysis revealed that the model with 2 factors had chi-square equal to 241.23 (p < 0.001), absolute fix indexes were good (i.e. GFI = 0.733, AGFI = 0.529), and incremental fix indexes were also adequate (i.e. NFI = 0.89 and CFI = 0.92).\n\n\nCONCLUSION\nThe developed Greek version of PSS-14 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas; a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties.",
"title": ""
},
{
"docid": "5cfbeef0e6ca5dd62a70160a83b0ecaa",
"text": "Tissue mimicking phantoms (TMPs) replicating the dielectric properties of wet skin, fat, blood, and muscle tissues for the 0.3 to 20 GHz frequency range are presented in this paper. The TMPs reflect the dielectric properties with maximum deviations of 7.7 units and 3.9 S/m for relative dielectric constant and conductivity, respectively, for the whole band. The dielectric properties of the blood mimicking material are further investigated by adding realistic glucose amounts and a Cole-Cole model used to compare the behavior with respect to changing glucose levels. In addition, a patch resonator was fabricated and tested with the four-layered physical phantom developed in house. It was observed that the input impedance of the resonator is sensitive to the changes in the dielectric properties and, hence, to the realistic glucose level changes in the blood layer.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
{
"docid": "b9b8a55afc751d77d1322de0746cc48b",
"text": "One week of solitary confinement of prison inmates produced significant changes in their EEG frequency and visual evoked potentials (VEP) that parallel those reported in laboratory studies of sensory deprivation. EEG frequency declined in a nonlinear manner over the period. VEP latency, which decreased with continued solitary confinement, was shorter for these 5s than for control 5s whose VEP latency did not change over the same period. Experimental 5s who had been in prison longer had shorter VEP latencies than relative newcomers to the prison.",
"title": ""
},
{
"docid": "26d9a51b9312af2b63d2e2876c9d448e",
"text": "Permission has been given to destroy this document when it is no longer needed. Cyber-enabled and cyber-physical systems connect and engage virtually every mission-critical military capability today. And as more warfighting technologies become integrated and connected, both the risks and opportunities from a cyberwarfare continue to grow—motivating sweeping requirements and investments in cybersecurity assessment capabilities to evaluate technology vulner-abilities, operational impacts, and operator effectiveness. Operational testing of cyber capabilities, often in conjunction with major military exercises, provides valuable connections to and feedback from the operational warfighter community. These connections can help validate capability impact on the mission and, when necessary, provide course-correcting feedback to the technology development process and its stakeholders. However, these tests are often constrained in scope, duration, and resources and require a thorough and wholistic approach, especially with respect to cyber technology assessments, where additional safety and security constraints are often levied. This report presents a summary of the state of the art in cyber assessment technologies and methodologies and prescribes an approach to the employment of cyber range operational exercises (OPEXs). Numerous recommendations on general cyber assessment methodologies and cyber range design are included, the most significant of which are summarized below. • Perform bottom-up and top-down assessment formulation methodologies to robustly link mission and assessment objectives to metrics, success criteria, and system observables. • Include threat-based assessment formulation methodologies that define risk and security met-rics within the context of mission-relevant adversarial threats and mission-critical system assets. • Follow a set of cyber range design mantras to guide and grade the design of cyber range components. • Call for future work in live-to-virtual exercise integration and cross-domain modeling and simulation technologies. • Call for continued integration of developmental and operational cyber assessment events, development of reusable cyber assessment test tools and processes, and integration of a threat-based assessment approach across the cyber technology acquisition cycle. Finally, this recommendations report was driven by obsevations made by the MIT Lincoln Laboratory (MIT LL) Cyber Measurement Campaign (CMC) team during an operational demonstration event for the DoD Enterprise Cyber Range Environment (DECRE) Command and Control Information Systems (C2IS). 1 This report also incorporates a prior CMC report based on Pacific Command (PACOM) exercise observations, as well as MIT LL's expertise in cyber range development and cyber systems assessment. 2 1 CMC is explained in further detail in Appendix A.1. 2 See References section at the end of the report. …",
"title": ""
},
{
"docid": "4d99090b874776b89092f63f21c8ea93",
"text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.",
"title": ""
},
{
"docid": "39ce21cf294147475b9bfe48851dcebe",
"text": "In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet’s learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.",
"title": ""
},
{
"docid": "7fd48dcff3d5d0e4bfccc3be67db8c00",
"text": "Criollo cacao (Theobroma cacao ssp. cacao) was cultivated by the Mayas over 1500 years ago. It has been suggested that Criollo cacao originated in Central America and that it evolved independently from the cacao populations in the Amazon basin. Cacao populations from the Amazon basin are included in the second morphogeographic group: Forastero, and assigned to T. cacao ssp. sphaerocarpum. To gain further insight into the origin and genetic basis of Criollo cacao from Central America, RFLP and microsatellite analyses were performed on a sample that avoided mixing pure Criollo individuals with individuals classified as Criollo but which might have been introgressed with Forastero genes. We distinguished these two types of individuals as Ancient and Modern Criollo. In contrast to previous studies, Ancient Criollo individuals formerly classified as ‘wild’, were found to form a closely related group together with Ancient Criollo individuals from South America. The Ancient Criollo trees were also closer to Colombian-Ecuadorian Forastero individuals than these Colombian-Ecuadorian trees were to other South American Forastero individuals. RFLP and microsatellite analyses revealed a high level of homozygosity and significantly low genetic diversity within the Ancient Criollo group. The results suggest that the Ancient Criollo individuals represent the original Criollo group. The results also implies that this group does not represent a separate subspecies and that it probably originated from a few individuals in South America that may have been spread by man within Central America.",
"title": ""
},
{
"docid": "64d3ecbffd5bc7e7b3a4fc1380e8818b",
"text": "In this paper a new lane marking detection algorithm in different road conditions for monocular vision was proposed. Traditional detection algorithms implement the same operation for different road conditions. It is difficult to simultaneously satisfy the requirements of timesaving and robustness in different road conditions. Our algorithm divides the road conditions into two classes. One class is for the clean road, and the other one is for the road with disturbances such as shadows, non-lane markings and vehicles. Our algorithm has its advantages in clean road while has a robust detection of lane markings in complex road. On the remapping image obtained from inverse perspective transformation, a search strategy is used to judge whether pixels belong to the same lane marking. When disturbances appear on the road, this paper uses probabilistic Hough transform to detect lines, and finds out the true lane markings by use of their geometrical features. The experimental results have shown the robustness and accuracy of our algorithm with respect to shadows, changing illumination and non-lane markings.",
"title": ""
},
{
"docid": "b13969bec16ace6c28c5a7ab82f7cc67",
"text": "OBJECTIVE\nRecent evidence shows that calcium released from the sarcoplasmic reticulum (SR) plays an important role in the regulation of heart rate. The aim of this study was to investigate the subcellular distribution of ryanodine receptors in the guinea-pig sino-atrial (SA) node and to determine their functional role in the regulation of pacemaker frequency in response to beta-adrenoceptor stimulation.\n\n\nMETHODS\nMonoclonal antibodies raised against the cardiac ryanodine receptor were used with confocal microscopy to investigate ryanodine receptor distribution in single guinea-pig SA node cells. The functional role of ryanodine receptors was investigated in both multicellular SA node/atrial preparations and in single SA node cells.\n\n\nRESULTS\nRyanodine receptor labelling was observed in all SA node cells studied and showed both subsarcolemmal and intracellular staining. In the latter, labelling appeared as transverse bands with a regular periodicity of approximately 2 microm. This interval resembled that of the expected sarcomere spacing but did not, however, depend on the presence of transverse tubules. The bands of ryanodine receptors appeared to be located in the region of the Z lines, based on co-distribution studies with antibodies to alpha-actinin, myomesin and binding sites for phalloidin. Functional studies on single SA node cells showed that application of ryanodine (2 micromol/l) reduced the rate of firing of spontaneous action potentials (measured using the perforated patch clamp technique) and this was associated with changes in action potential characteristics. Ryanodine also significantly decreased the positive chronotropic actions of isoprenaline in both multicellular and single cell preparations. In single cells exposed to 100 nmol/l isoprenaline, ryanodine caused a decrease in the rate of firing and this was associated with a decrease in the amplitude of the measured calcium transients.\n\n\nCONCLUSIONS\nThese findings are the first to show immunocytochemical evidence for the presence and organisation of ryanodine receptor calcium release channels in mammalian SA node cells. This study also provides evidence of a role for ryanodine sensitive sites in the beta-adrenergic modulation of heart rate in this species.",
"title": ""
},
{
"docid": "40533c0a32bd67ae4e63ddd5f0a92506",
"text": "Synopsis: The present paper presents in chapter 1 a model for the characterization of concrete creep and shrinkage in design of concrete structures (Model B3), which is simpler, agrees better with the experimental data and is better theoretically justified than the previous models. The model complies with the general guidelines recently formulated by RILEM TC-107ß1. Justifications of various aspects of the model and diverse refinements are given in Chapter 2, and many simple explanations are appended in the commentary at the end of Chapter 1 (these parts do not to be read by those who merely want to apply the model). The prediction model B3 is calibrated by a computerized data bank comprising practically all the relevant test data obtained in various laboratories throughout the world. The coefficients of variation of the deviations of the model from the data are distinctly smaller than those for the latest CEB model (1990), and much smaller than those for the previous model in ACI 209 (which was developed in the mid-1960’s). The model is simpler than the previous models (BP and BPKX) developed at Northwestern University, yet it has comparable accuracy and is more rational. The effect of concrete composition and design strength on the model parameters is the main source of error of the model. A method to reduce this error by updating one or two model parameters on the basis of short-time creep tests is given. The updating of model parameters is particularly important for high-strength concretes and other special concretes containing various admixtures, superplasticizers, water-reducing agents and pozzolanic materials. For the updating of shrinkage prediction, a new method in which the shrinkage half-time is calibrated by simultaneous measurements of water loss is presented. This approach circumvents the large sensitivity of the shrinkage extrapolation problem to small changes in the material parameters. The new model allows a more realistic assessment of the creep and shrinkage effects in concrete structures, which significantly affect the durability and long-time serviceability of civil engineering infrastructure.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "e010c9ce6606ae64904a95bde0d1dfe8",
"text": "OBJECT\nThe extent of tumor resection that should be undertaken in patients with glioblastoma multiforme (GBM) remains controversial. The purpose of this study was to identify significant independent predictors of survival in these patients and to determine whether the extent of resection was associated with increased survival time.\n\n\nMETHODS\nThe authors retrospectively analyzed 416 consecutive patients with histologically proven GBM who underwent tumor resection at the authors' institution between June 1993 and June 1999. Volumetric data and other tumor characteristics identified on magnetic resonance (MR) imaging were collected prospectively.\n\n\nCONCLUSIONS\nFive independent predictors of survival were identified: age, Karnofsky Performance Scale (KPS) score, extent of resection, and the degree of necrosis and enhancement on preoperative MR imaging studies. A significant survival advantage was associated with resection of 98% or more of the tumor volume (median survival 13 months, 95% confidence interval [CI] 11.4-14.6 months), compared with 8.8 months (95% CI 7.4-10.2 months; p < 0.0001) for resections of less than 98%. Using an outcome scale ranging from 0 to 5 based on age, KPS score, and tumor necrosis on MR imaging, we observed significantly longer survival in patients with lower scores (1-3) who underwent aggressive resections, and a trend toward slightly longer survival was found in patients with higher scores (4-5). Gross-total tumor resection is associated with longer survival in patients with GBM, especially when other predictive variables are favorable.",
"title": ""
},
{
"docid": "a7e6a2145b9ae7ca2801a3df01f42f5e",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "93f1bf78846bd048e0140b72e3140946",
"text": "Doubly-fed induction generator (DFIG) tied with grid and controlled in the stator flux oriented reference frame is known to have poor damping due to the large time constant of the stator circuit. Therefore, the stator flux exhibits sustained oscillation subjected to any disturbances. In this paper, we propose a state feedback control method to actively damp the oscillation by injecting optimal amount of rotor current. The controller is further modified to simplify the implementation of the control and to minimize the effect of current injection on the power converter rating. The modified controller can be termed as a suboptimal controller. The performance of the designed controllers are verified through simulation of the simplified model of DFIG with an arbitrary initial stator flux perturbation. The required rotor current injections are observed in each of the cases of design. Effect of current injection by different controller on the converter rating is also explained. The suboptimal control is implemented to damp the stator flux oscillation in a DFIG, used in wave energy conversion system (WECS). In WECS, the stator flux oscillation appears due to the phase sequence switching of the DFIG. Simulation results show that the active damping control improves the damping of the system.",
"title": ""
},
{
"docid": "75a59bfac741bf3e4c0835399d05b55a",
"text": "In this paper we describe a functioning low cost embedded vision system which can perform basic color blob tracking at 16.7 frames per second. This system utilizes a low cost CMOS color camera module and all image data is processed by a high speed, low cost microcontroller. This eliminates the need for a separate frame grabber and high speed host computer typically found in traditional vision systems. The resulting embedded system makes it possible to utilize simple color vision algorithms in applications like small mobile robotics where a traditional vision system would not be practical.",
"title": ""
},
{
"docid": "67d317befd382c34c143ebfe806a3b55",
"text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.",
"title": ""
},
{
"docid": "4da68af0db0b1e16f3597c8820b2390d",
"text": "We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted.\n As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.",
"title": ""
},
{
"docid": "c574d775613aee0b5a51c36062eb2cc4",
"text": "Generally additional leakage inductance and two clamp diodes are adopted into the conventional phase shift full bridge (PSFB) converter for reducing the voltage stress of secondary rectifier diodes and extending the range of zero voltage switching (ZVS) operation. However, the core and copper loss caused by additional leakage inductor can be high enough to decrease the whole efficiency of DC/DC converter. Therefore, a new ZVS PSFB converter with controlled leakage inductance of transformer is proposed. The proposed converter makes both the transformer and additional leakage inductor with same ferrite core by separated primary winding (SPW). Using this method, leakage inductance is controlled by the winding ratio of SPW. Moreover, by using this integrated magnetic component with single core, size and core loss can be greatly reduced and it results in the improvement of efficiency and power density of DC/DC converter. The operational principle and analysis of proposed converter are presented and verified by the 1.2kW prototype.",
"title": ""
},
{
"docid": "1da9ea0ec4c33454ad9217bcf7118c1c",
"text": "We use quantitative media (blogs, and news as a comparison) data generated by a large-scale natural language processing (NLP) text analysis system to perform a comprehensive and comparative study on how a company’s reported media frequency, sentiment polarity and subjectivity anticipates or reflects its stock trading volumes and financial returns. Our analysis provides concrete evidence that media data is highly informative, as previously suggested in the literature – but never studied on our scale of several large collections of blogs and news for over five years. Building on our findings, we give a sentiment-based market-neutral trading strategy which gives consistently favorable returns with low volatility over a five year period (2005-2009). Our results are significant in confirming the performance of general blog and news sentiment analysis methods over broad domains and sources. Moreover, several remarkable differences between news and blogs are also identified in this paper.",
"title": ""
}
] |
scidocsrr
|
4a37ef1f710754020c100affd2bd5ff0
|
Topic Modelling with Word Embeddings
|
[
{
"docid": "f6121f69419a074b657bb4a0324bae4a",
"text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling models to discover models with many topics enriched by prior knowledge. 
2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V. In LDA, each document d is associated with a multinomial distribution over topics, θ_d. The probability of a word type w given topic z is φ_{w|z}. The multinomial distributions θ_d and φ_z are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the posterior distribution P(z|w). Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z_- is P(z = t | z_-, w) ∝ (n_{d,t} + α) (n_{w,t} + β) / (n_t + Vβ)",
"title": ""
}
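The passage above ends with the collapsed Gibbs sampling update that SC-LDA builds on. As a point of reference, the sketch below implements one sweep of that baseline sampler for plain LDA (without SC-LDA's sparse factor potentials); the count-array layout and variable names are assumptions made for illustration.

```python
import numpy as np

def gibbs_sweep(docs, z, n_dt, n_wt, n_t, alpha, beta, V):
    """One pass of collapsed Gibbs sampling for plain LDA.
    docs[d] is a list of word ids; z[d][i] is the current topic of token i."""
    T = n_t.shape[0]
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            t_old = z[d][i]
            # Remove the token's current assignment from the counts.
            n_dt[d, t_old] -= 1
            n_wt[w, t_old] -= 1
            n_t[t_old] -= 1

            # P(z = t | z_-, w) ∝ (n_dt + alpha) * (n_wt + beta) / (n_t + V*beta)
            p = (n_dt[d] + alpha) * (n_wt[w] + beta) / (n_t + V * beta)
            t_new = np.random.choice(T, p=p / p.sum())

            # Add the token back with its newly sampled topic.
            z[d][i] = t_new
            n_dt[d, t_new] += 1
            n_wt[w, t_new] += 1
            n_t[t_new] += 1
    return z
```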
] |
[
{
"docid": "8d0221daae5933760698b8f4f7943870",
"text": "We introduce a novel, online method to predict pedestrian trajectories using agent-based velocity-space reasoning for improved human-robot interaction and collision-free navigation. Our formulation uses velocity obstacles to model the trajectory of each moving pedestrian in a robot’s environment and improves the motion model by adaptively learning relevant parameters based on sensor data. The resulting motion model for each agent is computed using statistical inferencing techniques, including a combination of Ensemble Kalman filters and a maximum-likelihood estimation algorithm. This allows a robot to learn individual motion parameters for every agent in the scene at interactive rates. We highlight the performance of our motion prediction method in real-world crowded scenarios, compare its performance with prior techniques, and demonstrate the improved accuracy of the predicted trajectories. We also adapt our approach for collision-free robot navigation among pedestrians based on noisy data and highlight the results in our simulator.",
"title": ""
},
{
"docid": "3538d14694af47dc0fb31696913da15a",
"text": "Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multiquery optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hither-to been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space.\nIn this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.",
"title": ""
},
{
"docid": "8311e231fc648a725cd643ed531aeef9",
"text": "Given an image stream, our on-line algorithm will select the semantically-important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets followed by a graph based incremental clustering procedure using a topic based image representation. A coreset for an image stream is a set of representative images that semantically compresses the data corpus, in the sense that every frame has a similar representative image in the coreset. We prove that our algorithm efficiently computes the smallest possible coreset under natural well-defined similarity metric and up to provably small approximation factor. The output visual summary is computed via a hierarchical tree of coresets for different parts of the image stream. This allows multi-resolution summarization (or a video summary of specified duration) in the batch setting and a memory-efficient incremental summary for the streaming case.",
"title": ""
},
{
"docid": "a747b503e597ebdb9fd1a32b9dccd04e",
"text": "In this paper, we introduce KAZE features, a novel multiscale 2D feature detection and description algorithm in nonlinear scale spaces. Previous approaches detect and describe features at different scale levels by building or approximating the Gaussian scale space of an image. However, Gaussian blurring does not respect the natural boundaries of objects and smoothes to the same degree both details and noise, reducing localization accuracy and distinctiveness. In contrast, we detect and describe 2D features in a nonlinear scale space by means of nonlinear diffusion filtering. In this way, we can make blurring locally adaptive to the image data, reducing noise but retaining object boundaries, obtaining superior localization accuracy and distinctiviness. The nonlinear scale space is built using efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. We present an extensive evaluation on benchmark datasets and a practical matching application on deformable surfaces. Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space, but comparable to SIFT, our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods.",
"title": ""
},
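KAZE's nonlinear scale space is built with variable-conductance diffusion solved by AOS schemes. The sketch below uses a simpler explicit Perona-Malik step only to illustrate the idea of edge-preserving conductance; the contrast parameter k and step size are assumptions, and the actual method uses the semi-implicit AOS solver described in the passage.

```python
import numpy as np

def perona_malik_step(img, k=0.02, dt=0.15):
    """One explicit step of edge-preserving (variable-conductance) diffusion.
    img is a 2D float array in [0, 1]; k controls which gradients count as edges."""
    # Differences to the four neighbours (np.roll wraps at the border,
    # which is acceptable for a sketch).
    north = np.roll(img, -1, axis=0) - img
    south = np.roll(img,  1, axis=0) - img
    east  = np.roll(img, -1, axis=1) - img
    west  = np.roll(img,  1, axis=1) - img

    # Conductance g(|∇L|) = exp(-(|∇L|/k)^2): close to 1 in flat regions, close to 0 at edges.
    g = lambda d: np.exp(-(d / k) ** 2)

    # Diffusion update: strong smoothing inside regions, little across edges.
    return img + dt * (g(north) * north + g(south) * south +
                       g(east) * east + g(west) * west)
```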
{
"docid": "1c0b590a687f628cb52d34a37a337576",
"text": "Hexagonal torus networks are special family of Eisenstein-Jacobi (EJ) networks which have gained popularity as good candidates network On-Chip (NoC) for interconnecting Multiprocessor System-on-Chips (MPSoCs). They showed better topological properties compared to the 2D torus networks with the same number of nodes. All-to-all broadcast is a collective communication algorithm used frequently in some parallel applications. Recently, an off-chip all-to-all broadcast algorithm has been proposed for hexagonal torus networks assuming half-duplex links and all-ports communication. The proposed all-to-all broadcast algorithm does not achieve the minimum transmission time and requires 24 kextra buffers, where kis the network diameter. We first extend this work by proposing an efficient all-to-all broadcast on hexagonal torus networks under full-duplex links and all-ports communications assumptions which achieves the minimum transmission delay but requires 36 k extra buffers per router. In a second stage, we develop a new all-to-all broadcast more suitable for hexagonal torus network on-chip that achieves optimal transmission delay time without requiring any extra buffers per router. By reducing the amount of buffer space, the new all-to-all broadcast reduces the routers cost which is an important issue in NoCs architectures.",
"title": ""
},
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
},
{
"docid": "500e8ab316398313c90a0ea374f28ee8",
"text": "Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.",
"title": ""
},
{
"docid": "264338f11dbd4d883e791af8c15aeb0d",
"text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.",
"title": ""
},
{
"docid": "0dafc618dbeb04c5ee347142d915a415",
"text": "Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.",
"title": ""
},
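The scale-ratio prediction in the grid-cell passage above can be made concrete with a short worked example (the starting period of 40 cm is an assumed value, not data from the paper):

```latex
% Geometric progression of grid scales with the predicted ideal ratio r = sqrt(e)
\[
  \lambda_{m} = \lambda_{1}\, r^{\,m-1}, \qquad r = \sqrt{e} \approx 1.65 .
\]
% Example: starting from an assumed smallest period of 40 cm, four modules give
\[
  \lambda_{1..4} \approx 40,\; 66,\; 109,\; 179\ \text{cm},
\]
% i.e. successive ratios of about 1.65, inside the 1.4--1.7 range quoted for realistic neurons.
```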
{
"docid": "419b3914edc182e4deffd05edcabcbe8",
"text": "To investigate the effects of self-presentation on the construct validity of the impression management (IM) and self-deceptive enhancement (SDE) scales of the Balanced Inventory of Social Desirable Responding Version 7 (BIDR-7), 155 participants completed the IM and SDE scales combined with standard instructions. IM and SDE were also presented with three self-presentation instructions: fake good, Agency, and Communion instructions. In addition, selfand social desirability ratings were assessed for a list of 190 personality-trait words. It could be shown that not only IM, but also SDE can be faked if participants are appropriately instructed to do so. In addition, personality-trait words related to IM were rated as socially more desirable than those related to SDE. BIDR scales were more highly related in the faking conditions than in the standard instruction condition. In addition, faked BIDR scores were not related to undistorted BIDR scores. These results implicate that both SDE and IM are susceptible to faking like any other personality questionnaire, and that both SDE and IM loose their original meaning under faking. Therefore, at least under faking social desirability scales do not seem to provide additional diagnostic information beyond that derived from personality scales. 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9581c692787cfef1ce2916100add4c1e",
"text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.",
"title": ""
},
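The grading pipeline described above (anatomical and texture features fed into a multiclass SVM) can be sketched with scikit-learn as below; the feature extraction step is left abstract, and the kernel and regularization settings are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

GRADES = ["normal", "mild", "moderate", "severe", "proliferative"]

def train_grader(feature_matrix, labels):
    """feature_matrix: (n_images, n_features) array of vessel/exudate/
    microaneurysm and texture descriptors; labels: indices into GRADES."""
    clf = make_pipeline(
        StandardScaler(),
        SVC(kernel="rbf", C=10.0, gamma="scale"),  # SVC handles multiclass via one-vs-one
    )
    clf.fit(feature_matrix, labels)
    return clf

def grade(clf, features):
    """Return the predicted severity grade for one image's feature vector."""
    return GRADES[int(clf.predict(np.atleast_2d(features))[0])]
```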
{
"docid": "a40d3b98ab50a5cd924be09ab1f1cc40",
"text": "Feeling comfortable reading and understanding financial statements is critical to the success of healthcare executives and physicians involved in management. Businesses use three primary financial statements: a balance sheet represents the equation, Assets = Liabilities + Equity; an income statement represents the equation, Revenues - Expenses = Net Income; a statement of cash flows reports all sources and uses of cash during the represented period. The balance sheet expresses financial indicators at one particular moment in time, whereas the income statement and the statement of cash flows show activity that occurred over a stretch of time. Additional information is disclosed in attached footnotes and other supplementary materials. There are two ways to prepare financial statements. Cash-basis accounting recognizes revenue when it is received and expenses when they are paid. Accrual-basis accounting recognizes revenue when it is earned and expenses when they are incurred. Although cash-basis is acceptable, periodically using the accrual method reveals important information about receivables and liabilities that could otherwise remain hidden. Become more engaged with your financial statements by spending time reading them, tracking key performance indicators, and asking accountants and financial advisors questions. This will help you better understand your business and build a successful future.",
"title": ""
},
{
"docid": "0ec7969da568af2e743d969f9805063d",
"text": "In this letter, a notched-band Vivaldi antenna with high-frequency selectivity is designed and investigated. To obtain two notched poles inside the stopband, an open-circuited half-wavelength resonator and a short-circuited stepped impedance resonator are properly introduced into the traditional Vivaldi antenna. By theoretically calculating the resonant frequencies of the two loaded resonators, the frequency locations of the two notched poles can be precisely determined, thus achieving a wideband antenna with a desired notched band. To validate the feasibility of this new approach, a notched band antenna with a fractional bandwidth of 145.8% is fabricated and tested. Results indicate that good frequency selectivity of the notched band from 4.9 to 6.6 GHz is realized, and the antenna exhibits good impedance match, high radiation gain, and excellent radiation directivity in the passband. Both the simulation and measurement results are provided with good agreement.",
"title": ""
},
{
"docid": "cebeaf1d155d5d7e4c62ec84cf36c087",
"text": "This paper presents the comparison of power captured by vertical and horizontal axis wind turbine (VAWT and HAWT). According to Betz, the limit of maximum coefficient power (CP) is 0.59. In this case CP is important parameter that determines the power extracted by a wind turbine we made. This paper investigates the impact of wind speed variation of wind turbine to extract the power. For VAWT we used H-darrieus type whose swept area is 3.14 m2 and so is HAWT. The wind turbines have 3 blades for each type. The air foil of both wind turbines are NACA 4412. We tested the model of wind turbine with various wind velocity which affects the performance. We have found that CP of HAWT is 0.54 with captured maximum power is 1363.6 Watt while the CP of VAWT is 0.34 with captured maximum power is 505.69 Watt. The power extracted of both wind turbines seems that HAWT power is much better than VAWT power.",
"title": ""
},
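The power coefficient comparison in the passage above follows directly from CP = P / (0.5 ρ A v³). The snippet below reproduces that arithmetic; the air density and the 11 m/s wind speed in the example call are assumptions chosen only for illustration, since the passage does not state the speed at which the maximum power was measured.

```python
RHO_AIR = 1.225  # kg/m^3, assumed sea-level air density

def power_coefficient(p_extracted_w, swept_area_m2, wind_speed_ms, rho=RHO_AIR):
    """CP = P / (0.5 * rho * A * v^3); the Betz limit caps CP at about 0.59."""
    p_available = 0.5 * rho * swept_area_m2 * wind_speed_ms ** 3
    return p_extracted_w / p_available

# Illustrative call with the paper's swept area and an assumed 11 m/s wind:
# power_coefficient(1363.6, 3.14, 11.0) -> roughly 0.53
```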
{
"docid": "e990d87c81e9c49fd45fc27afc6ebc07",
"text": "PURPOSE\nThis study aimed to evaluate the effects of the subchronic consumption of energy drinks and their constituents (caffeine and taurine) in male Wistar rats using behavioural and oxidative measures.\n\n\nMETHODS\nEnergy drinks (ED 5, 7.5, and 10 mL/kg) or their constituents, caffeine (3.2 mg/kg) and taurine (40 mg/kg), either separately or in combination, were administered orally to animals for 28 days. Attention was measured though the ox-maze apparatus and the object recognition memory test. Following behavioural analyses, markers of oxidative stress, including SOD, CAT, GPx, thiol content, and free radicals, were measured in the prefrontal cortex, hippocampus, and striatum.\n\n\nRESULTS\nThe latency time to find the first reward was lower in animals that received caffeine, taurine, or a combination of both (P = 0.003; ANOVA/Bonferroni). In addition, these animals took less time to complete the ox-maze task (P = 0.0001; ANOVA/Bonferroni), and had better short-term memory (P < 0.01, Kruskal-Wallis). The ED 10 group showed improvement in the attention task, but did not differ on other measures. In addition, there was an imbalance in enzymatic markers of oxidative stress in the prefrontal cortex, the hippocampus, and the striatum. In the group that received both caffeine and taurine, there was a significant increase in the production of free radicals in the prefrontal cortex and in the hippocampus (P < 0.0001; ANOVA/Bonferroni).\n\n\nCONCLUSIONS\nExposure to a combination of caffeine and taurine improved memory and attention, and led to an imbalance in the antioxidant defence system. These results differed from those of the group that was exposed to the energy drink. This might be related to other components contained in the energy drink, such as vitamins and minerals, which may have altered the ability of caffeine and taurine to modulate memory and attention.",
"title": ""
},
{
"docid": "af56806a30f708cb0909998266b4d8c1",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
},
{
"docid": "72e4984c05e6b68b606775bbf4ce3b33",
"text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"title": ""
},
{
"docid": "eb971f815c884ba873685ceb5779258e",
"text": "While many schools of psychotherapy have held that our early experiences with our caretakers have a powerful impact on our adult functioning, there have been plenty of hard-nosed academics and researchers who've remained unconvinced. Back in 1968, psychologist Walter Mischel created quite a stir when he challenged the concept that we even have a core personality that organizes our behavior, contending instead that situational factors are much better predictors of what we think and do. Some developmental psychologists, like Judith Rich Harris, author of The Nurture Assumption, have gone so far as to argue that the only important thing parents give their children is their genes, not their care. Others, like Jerome Kagan, have emphasized the ongoing influence of inborn temperament in shaping human experience, asserting that the effect of early experience, if any, is far more fleeting than is commonly assumed. In one memorable metaphor, Kagan likened the unfolding of life to a tape recorder with the record button always turned on and new experiences overwriting and erasing previous experiences. n At the same time, the last 50 years have seen the accumulation of studies supporting an alternative view: the idea that the emotional quality of our earliest attachment experience is perhaps the single most important influence on human development. The central figure in the birth of this school of research has been British psychiatrist and psychoanalyst John Bowlby, who challenged the Freudian view of development, claiming that it had focused too narrowly on the inner world of the child without taking into account the actual relational environment that shapes the earliest stages of human consciousness.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "a5f557ddac63cd24a11c1490e0b4f6d4",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
}
] |
scidocsrr
|
d49032ba4809876b705c678cd2e8bdde
|
A Communication System for Deaf and Dumb People
|
[
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
}
] |
[
{
"docid": "3b4fec89137f9d4690bff6470b285192",
"text": "The poor contrast and the overlapping of cervical cell cytoplasm are the major issues in the accurate segmentation of cervical cell cytoplasm. This paper presents an automated unsupervised cytoplasm segmentation approach which can effectively find the cytoplasm boundaries in overlapping cells. The proposed approach first segments the cell clumps from the cervical smear image and detects the nuclei in each cell clump. A modified Otsu method with prior class probability is proposed for accurate segmentation of nuclei from the cell clumps. Using distance regularized level set evolution, the contour around each nucleus is evolved until it reaches the cytoplasm boundaries. Promising results were obtained by experimenting on ISBI 2015 challenge dataset.",
"title": ""
},
{
"docid": "6ee55ac672b1d87d4f4947655d321fb8",
"text": "Federated identity providers, e.g., Facebook and PayPal, offer a convenient means for authenticating users to third-party applications. Unfortunately such cross-site authentications carry privacy and tracking risks. For example, federated identity providers can learn what applications users are accessing; meanwhile, the applications can know the users' identities in reality.\n This paper presents Crypto-Book, an anonymizing layer enabling federated identity authentications while preventing these risks. Crypto-Book uses a set of independently managed servers that employ a (t,n)-threshold cryptosystem to collectively assign credentials to each federated identity (in the form of either a public/private keypair or blinded signed messages). With the credentials in hand, clients can then leverage anonymous authentication techniques such as linkable ring signatures or partially blind signatures to log into third-party applications in an anonymous yet accountable way.\n We have implemented a prototype of Crypto-Book and demonstrated its use with three applications: a Wiki system, an anonymous group communication system, and a whistleblower submission system. Crypto-Book is practical and has low overhead: in a deployment within our research group, Crypto-Book group authentication took 1.607s end-to-end, an overhead of 1.2s compared to traditional non-privacy-preserving federated authentication.",
"title": ""
},
{
"docid": "9dbff74b02153ee33f23d00884d909f7",
"text": "The trend in isolated DC/DC converters is increasing output power demands and higher operating frequencies. Improved topologies and semiconductors can allow for lower loss at higher frequencies. A major barrier to further improvement is the transformer design. With high current levels and high frequency effects the transformers can become the major loss component in the circuit. High values of transformer leakage inductance can also greatly degrade the performance of the converter. Matrix transformers offer the ability to reduce winding loss and leakage inductance. This paper will study the impact of increased switching frequencies on transformer size and explore the use of matrix transformers in high current high frequency isolated applications. This paper will also propose an improved integrated matrix transformer design that can decrease core loss and further improve the performance of matrix transformers.",
"title": ""
},
{
"docid": "3a7657130cb165682cc2e688a7e7195b",
"text": "The functional simulator Simics provides a co-simulation integration path with a SystemC simulation environment to create Virtual Platforms. With increasing complexity of the SystemC models, this platform suffers from performance degradation due to the single threaded nature of the integrated Virtual Platform. In this paper, we present a multi-threaded Simics SystemC platform solution that significantly improves performance over the existing single threaded solution. The two schedulers run independently, only communicating in a thread safe manner through a message interface. Simics based logging and checkpointing are preserved within SystemC and tied to the corresponding Simics' APIs for a seamless experience. The solution also scales to multiple SystemC models within the platform, each running its own thread with an instantiation of the SystemC kernel. A second multi-cell solution is proposed providing comparable performance with the multi-thread solution, but reducing the burden of integration on the SystemC model. Empirical data is presented showing performance gains over the legacy single threaded solution.",
"title": ""
},
{
"docid": "af4106bc4051e01146101aeb58a4261f",
"text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural images classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60.000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only few minutes on an entry-level system, however we show that a supervised classifier trained with learned features provides significantly better results than using raw pixels values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.",
"title": ""
},
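The single-layer pipeline in the passage above (SOM codebook, histogram encoding, linear SVM) can be sketched as follows; the SOM grid size, training iterations, and the use of the MiniSom and scikit-learn packages are assumptions about tooling, not the authors' implementation, and the encoding shown is a flat variant of the pyramidal histogram described in the passage.

```python
import numpy as np
from minisom import MiniSom           # assumed third-party SOM implementation
from sklearn.svm import LinearSVC

def learn_codebook(patches, grid=(10, 10), iters=20000):
    """patches: (n_patches, patch_dim) array of preprocessed image patches."""
    som = MiniSom(grid[0], grid[1], patches.shape[1],
                  sigma=1.0, learning_rate=0.5)
    som.train_random(patches, iters)
    return som

def encode_image(som, image_patches, grid=(10, 10)):
    """Histogram of winning SOM units over one image's patches."""
    hist = np.zeros(grid[0] * grid[1])
    for p in image_patches:
        i, j = som.winner(p)
        hist[i * grid[1] + j] += 1
    return hist / max(hist.sum(), 1)

def train_classifier(encoded_images, labels):
    """Linear SVM on the SOM-based encodings."""
    clf = LinearSVC(C=1.0)
    clf.fit(np.vstack(encoded_images), labels)
    return clf
```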
{
"docid": "abef10b620026b2c054ca69a3c75f930",
"text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one refecting normal variation in effects of genetic and environmental conditions involving mental retardation. Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.",
"title": ""
},
{
"docid": "bf7203bb63cda371d78fa7337f2d7e2f",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read mathematical structures of language now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
},
{
"docid": "dfa611e19a3827c66ea863041a3ef1e2",
"text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.",
"title": ""
},
{
"docid": "1235be9c8056b20ded217bc7474208e1",
"text": "Pathological gambling (PG) is most likely associated with functional brain changes as well as neuropsychological and personality alterations. Recent research with the Iowa Gambling Task suggests decision-making impairments in PG. These deficits are usually attributed to disturbances in feedback processing and associated functional alterations of the orbitofrontal cortex. However, previous studies with other clinical populations found relations between executive (dorsolateral prefrontal) functions and decision-making using a task with explicit rules for gains and losses, the Game of Dice Task. In the present study, we assessed 25 male PG patients and 25 male healthy controls with the Game of Dice Task. PG patients showed pronounced deficits in the Game of Dice Task, and the frequency of risky decisions was correlated with executive functions and feedback processing. Therefore, risky decisions of PG patients might be influenced by both dorsolateral prefrontal and orbitofrontal cortex dysfunctions.",
"title": ""
},
{
"docid": "06803b2748e6a16ecb3bb93efe60e9a7",
"text": "Considerable buzz surrounds artificial intelligence, and, indeed, AI is all around us. As with any software-based technology, it is also prone to vulnerabilities. Here, the author examines how we determine whether AI is sufficiently reliable to do its job and how much we should trust its outcomes.",
"title": ""
},
{
"docid": "0d2791ea015a251efd06de0468315194",
"text": "We introduce a novel two-stage approach for the important cybersecurity problem of detecting the presence of a botnet and identifying the compromised nodes (the bots), ideally before the botnet becomes active. The first stage detects anomalies by leveraging large deviations of an empirical distribution. We propose two approaches to create the empirical distribution: 1) a flow-based approach estimating the histogram of quantized flows and 2) a graph-based approach estimating the degree distribution of node interaction graphs, encompassing both Erdős-Rényi graphs and scale-free graphs. The second stage detects the bots using ideas from social network community detection in a graph that captures correlations of interactions among nodes over time. Community detection is performed by maximizing a modularity measure in this graph. The modularity maximization problem is nonconvex. We propose a convex relaxation, an effective randomization algorithm, and establish sharp bounds on the suboptimality gap. We apply our method to real-world botnet traffic and compare its performance with other methods.",
"title": ""
},
{
"docid": "8e3bf062119c6de9fa5670ce4b00764b",
"text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2) V(-1) s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.",
"title": ""
},
{
"docid": "1df103aef2a4a5685927615cfebbd1ea",
"text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of the a slip and the appearance of the ratio change (74 ±9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.",
"title": ""
},
{
"docid": "40369d066befb131bf48114534a79698",
"text": "Spark has been increasingly adopted by industries in recent years for big data analysis by providing a fault tolerant, scalable and easy-to-use in memory abstraction. Moreover, the community has been actively developing a rich ecosystem around Spark, making it even more attractive. However, there is not yet a Spark specify benchmark existing in the literature to guide the development and cluster deployment of Spark to better fit resource demands of user applications. In this paper, we present SparkBench, a Spark specific benchmarking suite, which includes a comprehensive set of applications. SparkBench covers four main categories of applications, including machine learning, graph computation, SQL query and streaming applications. We also characterize the resource consumption, data flow and timing information of each application and evaluate the performance impact of a key configuration parameter to guide the design and optimization of Spark data analytic platform.",
"title": ""
},
{
"docid": "40f3a647fcaac638373f51fe125c36bb",
"text": "In this paper we presented a design of 4 bit attenuator with RF MEMS switches and distributed attenuation networks. The substrate of this attenuator is high resistance silicon and the TaN thin film is used as resistors. RF MEMS switches have excellent microwave properties to reduce the insertion loss of attenuator and increase the insulation. Distributed attenuation networks employed as fixed attenuators have the advantages of smaller size and better performance in comparison to conventional π or T-type fixed attenuators. Over DC-20GHz, the simulation results show the attenuation flatness of 1.52-1.65dB and the attenuation range of 15.35-17.02dB. The minimum attenuation is 0.44-1.96dB in the interesting frequency range. The size of the attenuator is 2152 × 7500μm2.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
},
{
"docid": "b24fc322e0fec700ec0e647c31cfd74d",
"text": "Organometal trihalide perovskite solar cells offer the promise of a low-cost easily manufacturable solar technology, compatible with large-scale low-temperature solution processing. Within 1 year of development, solar-to-electric power-conversion efficiencies have risen to over 15%, and further imminent improvements are expected. Here we show that this technology can be successfully made compatible with electron acceptor and donor materials generally used in organic photovoltaics. We demonstrate that a single thin film of the low-temperature solution-processed organometal trihalide perovskite absorber CH3NH3PbI3-xClx, sandwiched between organic contacts can exhibit devices with power-conversion efficiency of up to 10% on glass substrates and over 6% on flexible polymer substrates. This work represents an important step forward, as it removes most barriers to adoption of the perovskite technology by the organic photovoltaic community, and can thus utilize the extensive existing knowledge of hybrid interfaces for further device improvements and flexible processing platforms.",
"title": ""
},
{
"docid": "eece7dab68d56d3d5f28a72e873a0a72",
"text": "OBJECTIVES\nTo describe the effect of multidisciplinary care on survival in women treated for breast cancer.\n\n\nDESIGN\nRetrospective, comparative, non-randomised, interventional cohort study.\n\n\nSETTING\nNHS hospitals, health boards in the west of Scotland, UK.\n\n\nPARTICIPANTS\n14,358 patients diagnosed with symptomatic invasive breast cancer between 1990 and 2000, residing in health board areas in the west of Scotland. 13,722 (95.6%) patients were eligible (excluding 16 diagnoses of inflammatory cancers and 620 diagnoses of breast cancer at death).\n\n\nINTERVENTION\nIn 1995, multidisciplinary team working was introduced in hospitals throughout one health board area (Greater Glasgow; intervention area), but not in other health board areas in the west of Scotland (non-intervention area).\n\n\nMAIN OUTCOME MEASURES\nBreast cancer specific mortality and all cause mortality.\n\n\nRESULTS\nBefore the introduction of multidisciplinary care (analysed time period January 1990 to September 1995), breast cancer mortality was 11% higher in the intervention area than in the non-intervention area (hazard ratio adjusted for year of incidence, age at diagnosis, and deprivation, 1.11; 95% confidence interval 1.00 to 1.20). After multidisciplinary care was introduced (time period October 1995 to December 2000), breast cancer mortality was 18% lower in the intervention area than in the non-intervention area (0.82, 0.74 to 0.91). All cause mortality did not differ significantly between populations in the earlier period, but was 11% lower in the intervention area than in the non-interventional area in the later period (0.89, 0.82 to 0.97). Interrupted time series analyses showed a significant improvement in breast cancer survival in the intervention area in 1996, compared with the expected survival in the same year had the pre-intervention trend continued (P=0.004). This improvement was maintained after the intervention was introduced.\n\n\nCONCLUSION\nIntroduction of multidisciplinary care was associated with improved survival and reduced variation in survival among hospitals. Further analysis of clinical audit data for multidisciplinary care could identify which aspects of care are most associated with survival benefits.",
"title": ""
},
{
"docid": "bc4ce8c0dce6515d1432a6baecef4614",
"text": "The lsemantica command, presented in this paper, implements Latent Semantic Analysis in Stata. Latent Semantic Analysis is a machine learning algorithm for word and text similarity comparison. Latent Semantic Analysis uses Truncated Singular Value Decomposition to derive the hidden semantic relationships between words and texts. lsemantica provides a simple command for Latent Semantic Analysis in Stata as well as complementary commands for text similarity comparison.",
"title": ""
},
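lsemantica runs inside Stata, but the pipeline it wraps (term-document matrix, truncated SVD, cosine similarity) can be illustrated with a few lines of Python; this sketch uses a toy corpus and scikit-learn and is not the lsemantica implementation itself:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "latent semantic analysis compares texts",
    "singular value decomposition reveals hidden semantic structure",
    "the weather today is sunny and warm",
]

# Term-document matrix, then truncated SVD to a low-rank semantic space.
X = CountVectorizer().fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(X)

# Pairwise document similarity in the reduced space.
print(cosine_similarity(doc_vectors).round(2))
```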
{
"docid": "4413b7c20191b443be6184fe927384c8",
"text": "Falls and resulting physical-psychological consequences in the elderly are a major health hazard and a serious obstacle for independent living. So development of intelligent video surveillance systems is so important due to providing safe and secure environments. To this end, this paper proposes a novel approach for human fall detection based on human shape variation. Combination of best-fit approximated ellipse around the human body, projection histograms of the segmented silhouette and temporal changes of head pose, would provide a useful cue for detection different behaviors. Extracted feature vectors are finally fed to a multi-class support vector machine for precise classification of motions and determination of a fall event. Unlike existent fall detection systems that only deal with limited movement patterns, we considered wide range of motions consisting of normal daily life activities, abnormal behaviors and also unusual events. Reliable recognition rate of experimental results underlines satisfactory performance of our system.",
"title": ""
}
] |
scidocsrr
|
ce079846ab9f8d6aa54b41cbbcdb5cde
|
High-frequency oscillations and the neurobiology of schizophrenia
|
[
{
"docid": "f2e3343fbed363ede9e67903ea8422a5",
"text": "The emergence of a unified cognitive moment relies on the coordination of scattered mosaics of functionally specialized brain regions. Here we review the mechanisms of large-scale integration that counterbalance the distributed anatomical and functional organization of brain activity to enable the emergence of coherent behaviour and cognition. Although the mechanisms involved in large-scale integration are still largely unknown, we argue that the most plausible candidate is the formation of dynamic links mediated by synchrony over multiple frequency bands.",
"title": ""
}
] |
[
{
"docid": "9f7987bd6e65f26cd240cc5fcda82094",
"text": "Surface roughness is known to amplify hydrophobicity. It is observed that, in general, two drop shapes are possible on a given rough surface. These two cases correspond to the Wenzel (liquid wets the grooves of the rough surface) and Cassie (the drop sits on top of the peaks of the rough surface) formulas. Depending on the geometric parameters of the substrate, one of these two cases has lower energy. It is not guaranteed, though, that a drop will always exist in the lower energy state; rather, the state in which a drop will settle depends typically on how the drop is formed. In this paper, we investigate the transition of a drop from one state to another. In particular, we are interested in the transition of a \"Cassie drop\" to a \"Wenzel drop\", since it has implications on the design of superhydrophobic rough surfaces. We propose a methodology, based on energy balance, to determine whether a transition from the Cassie to Wenzel case is possible.",
"title": ""
},
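For reference, the two wetting states discussed above are conventionally described by the Wenzel and Cassie-Baxter relations for the apparent contact angle; these standard formulas (with θ the Young contact angle, r ≥ 1 the roughness ratio, and φs the wetted solid fraction) are included here as background rather than taken from the passage:

```latex
\cos\theta^{*}_{\text{Wenzel}} = r \cos\theta,
\qquad
\cos\theta^{*}_{\text{Cassie}} = \phi_{s}\,(1 + \cos\theta) - 1 .
```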
{
"docid": "87aeb2c919ce8c06b52cb4bf64c5effb",
"text": "To study the operation of a PLL and its application to demodulate a FSK signal. The LM565 is a general purpose phase locked loop (PLL) containing a stable, highly linear voltage controlled oscillator (VCO) and a double balanced phase detector with good carrier suppression. This device can be used in several kinds of applications: data synchronization, Both the VCO free-running operation frequency and the filter bandwidth can be adjusted by using external resistors and capacitors. Next, the main features of the device are summarized:",
"title": ""
},
{
"docid": "1b5450c2f21cab5117275b787413b3ad",
"text": "The security and privacy of the data transmitted is an important aspect of the exchange of information on the Internet network. Cryptography and Steganography are two of the most commonly used digital data security techniques. In this research, we proposed the combination of the cryptographic method with Data Encryption Standard (DES) algorithm and the steganographic method with Discrete Cosine Transform (DCT) to develop a digital data security application. The application can be used to secure document data in Word, Excel, Powerpoint or PDF format. Data encrypted with DES algorithm and further hidden in image cover using DCT algorithm. The results showed that the quality of the image that has been inserted (stego-image) is still in a good category with an average PSNR value of 46.9 dB. Also, the experiment results show that the average computational time of 0.75 millisecond/byte, an average size increase of 4.79 times and a success rate of 58%. This research can help solve the problem of data and information security that will be sent through a public network like the internet.",
"title": ""
},
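A minimal sketch of the encrypt-then-embed scheme described above, assuming the pycryptodome and SciPy packages, a grayscale cover image held as a NumPy array, and a simple one-bit-per-8×8-block parity rule on a mid-frequency DCT coefficient; the passage does not specify the DES mode, padding, or coefficient choice, so those details are illustrative:

```python
import numpy as np
from scipy.fftpack import dct, idct
from Crypto.Cipher import DES          # pycryptodome

def des_encrypt(message: bytes, key: bytes) -> bytes:
    pad = 8 - len(message) % 8         # pad to the 8-byte DES block size
    return DES.new(key, DES.MODE_ECB).encrypt(message + bytes([pad]) * pad)

def embed(cover: np.ndarray, bits, coeff=(4, 3), q=16.0) -> np.ndarray:
    """Hide one bit per 8x8 block in the parity of a quantized DCT coefficient."""
    stego, it = cover.astype(float).copy(), iter(bits)
    for y in range(0, stego.shape[0] - 7, 8):
        for x in range(0, stego.shape[1] - 7, 8):
            bit = next(it, None)
            if bit is None:
                return stego
            block = dct(dct(stego[y:y+8, x:x+8], axis=0, norm="ortho"), axis=1, norm="ortho")
            k = int(round(block[coeff] / q))
            if (k & 1) != bit:
                k += 1                 # flip parity to encode the bit
            block[coeff] = k * q
            stego[y:y+8, x:x+8] = idct(idct(block, axis=1, norm="ortho"), axis=0, norm="ortho")
    return stego

payload = des_encrypt(b"confidential report contents", b"8bytekey")
bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8)).tolist()
cover = np.random.default_rng(1).integers(0, 256, size=(128, 128)).astype(float)
stego = embed(cover, bits)
print(len(bits), "bits embedded; max pixel change =",
      round(float(np.abs(stego - cover).max()), 2))
```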
{
"docid": "391c34e983c99af1cc0a06f6f1d4a6bf",
"text": "Network protocol reverse engineering of botnet command and control (C&C) is a challenging task, which requires various manual steps and a significant amount of domain knowledge. Furthermore, most of today's C&C protocols are encrypted, which prevents any analysis on the traffic without first discovering the encryption algorithm and key. To address these challenges, we present an end-to-end system for automatically discovering the encryption algorithm and keys, generating a protocol specification for the C&C traffic, and crafting effective network signatures. In order to infer the encryption algorithm and key, we enhance state-of-the-art techniques to extract this information using lightweight binary analysis. In order to generate protocol specifications we infer field types purely by analyzing network traffic. We evaluate our approach on three prominent malware families: Sality, ZeroAccess and Ramnit. Our results are encouraging: the approach decrypts all three protocols, detects 97% of fields whose semantics are supported, and infers specifications that correctly align with real protocol specifications.",
"title": ""
},
{
"docid": "b12cc6abd517246009e1d4230d1878c4",
"text": "Electronic government is being increasingly recognized as a means for transforming public governance. Despite this increasing interest, information systems (IS) literature is mostly silent on what really contributes to the success of e-government 100 TEO, SRIVASTAVA, AND JIANG Web sites. To fill this gap, this study examines the role of trust in e-government success using the updated DeLone and McLean IS success model as the theoretical framework. The model is tested via a survey of 214 Singapore e-government Web site users. The results show that trust in government, but not trust in technology, is positively related to trust in e-government Web sites. Further, trust in e-government Web sites is positively related to information quality, system quality, and service quality. The quality constructs have different effects on “intention to continue” using the Web site and “satisfaction” with the Web site. Post hoc analysis indicates that the nature of usage (active versus passive users) may help us better understand the interrelationships among success variables examined in this study. This result suggests that the DeLone and McLean model can be further extended by examining the nature of IS use. In addition, it is important to consider the role of trust as well as various Web site quality attributes in understanding e-government success.",
"title": ""
},
{
"docid": "2578e5a45d99dec40f36a6a2de52136f",
"text": "Speech emotion recognition system aims at automatically identifying the emotion of the speaker from the speech. It is a modification of the speech recognition system which only identifies the speech. In this paper, we study the feature extraction algorithm such as pitch, formant frequency and MFCC.",
"title": ""
},
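A short sketch of extracting two of the features named above (MFCCs and pitch) with the librosa package; the file name is a placeholder, and formant estimation (which typically uses LPC analysis) is left out:

```python
import numpy as np
import librosa

# Placeholder path; any mono speech recording will do.
y, sr = librosa.load("utterance.wav", sr=16000)

# 13 MFCCs per frame, summarized by their mean over time.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Fundamental-frequency (pitch) track via the YIN estimator.
f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)

utterance_features = np.concatenate([mfcc.mean(axis=1), [f0.mean(), f0.std()]])
print(utterance_features.shape)   # one fixed-length 15-dimensional descriptor
```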
{
"docid": "340f4f9336dd0884bb112345492b47f9",
"text": "Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the nondifferentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-theart on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC2002 dataset, where we achieve higher scores than a state-of-the-art model.",
"title": ""
},
{
"docid": "ec105642406ba9111485618e85f5b7cd",
"text": "We present simulations of evacuation processes using a recently introduced cellular automaton model for pedestrian dynamics. This model applies a bionics approach to describe the interaction between the pedestrians using ideas from chemotaxis. Here we study a rather simple situation, namely the evacuation from a large room with one or two doors. It is shown that the variation of the model parameters allows to describe different types of behaviour, from regular to panic. We find a nonmonotonic dependence of the evacuation times on the coupling constants. These times depend on the strength of the herding behaviour, with minimal evacuation times for some intermediate values of the couplings, i.e. a proper combination of herding and use of knowledge about the shortest way to the exit.",
"title": ""
},
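A toy version of the kind of cellular-automaton evacuation dynamic described above, assuming a square room with a single door, a static distance-to-exit floor field, and an exponential (softmax-like) choice rule whose coupling constant stands in for the model parameters discussed in the passage; it is not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 15
exit_cell = (H // 2, 0)                               # one door in the left wall

yy, xx = np.mgrid[0:H, 0:W]
field = -np.hypot(yy - exit_cell[0], xx - exit_cell[1])   # higher = closer to the door
k_s = 3.0                                             # coupling to the floor field

cells = rng.choice(H * W, size=60, replace=False)
peds = {(c // W, c % W) for c in cells}               # 60 pedestrians at random cells

def options(r, c):
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
    cand = [(r + dr, c + dc) for dr, dc in moves if 0 <= r + dr < H and 0 <= c + dc < W]
    return [n for n in cand if n == (r, c) or n not in peds]   # at most one per cell

steps = 0
while peds and steps < 5000:
    steps += 1
    for p in sorted(peds):                            # sequential update, fixed order
        opts = options(*p)
        w = np.exp(k_s * np.array([field[n] for n in opts]))
        new = opts[rng.choice(len(opts), p=w / w.sum())]
        peds.remove(p)
        if new != exit_cell:                          # reaching the door = leaving the room
            peds.add(new)

print("room empty after", steps, "update steps;", len(peds), "pedestrians left inside")
```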
{
"docid": "c7a96129484bbedd063a0b322d9ae3d3",
"text": "BACKGROUND\nNon-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and paucity of SNPs in a genome.\n\n\nRESULTS\nWe have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with the fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it. Our method had a perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur source code is available on GitHub at https://github.com/compbio-UofT/FSDA CONTACT: : brudno@cs.toronto.edu.",
"title": ""
},
{
"docid": "f97ed9ef35355feffb1ebf4242d7f443",
"text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.",
"title": ""
},
{
"docid": "09e2a91a25e4ecccc020a91e14a35282",
"text": "A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.",
"title": ""
},
{
"docid": "005794dc95a118fc7382c73a231b250b",
"text": "Fully convolutional neural networks (FCNs) have been shown to achieve the state-of-the-art performance on the task of classifying time series sequences. We propose the augmentation of fully convolutional networks with long short term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. Our proposed models significantly enhance the performance of fully convolutional networks with a nominal increase in model size and require minimal preprocessing of the data set. The proposed long short term memory fully convolutional network (LSTM-FCN) achieves the state-of-the-art performance compared with others. We also explore the usage of attention mechanism to improve time series classification with the attention long short term memory fully convolutional network (ALSTM-FCN). The attention mechanism allows one to visualize the decision process of the LSTM cell. Furthermore, we propose refinement as a method to enhance the performance of trained models. An overall analysis of the performance of our model is provided and compared with other techniques.",
"title": ""
},
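A compact Keras sketch of the LSTM-FCN architecture described above: an LSTM branch and a fully convolutional branch whose outputs are concatenated before the softmax. Layer sizes are chosen for illustration, the paper's dimension-shuffle on the LSTM input is omitted, and the attention (ALSTM-FCN) and refinement variants are not shown:

```python
from tensorflow.keras import layers, models

def lstm_fcn(n_timesteps: int, n_classes: int) -> models.Model:
    inp = layers.Input(shape=(n_timesteps, 1))

    # LSTM branch with heavy dropout.
    x1 = layers.LSTM(8)(inp)
    x1 = layers.Dropout(0.8)(x1)

    # Fully convolutional branch: three Conv1D blocks + global average pooling.
    x2 = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x2 = layers.Conv1D(filters, kernel, padding="same")(x2)
        x2 = layers.BatchNormalization()(x2)
        x2 = layers.Activation("relu")(x2)
    x2 = layers.GlobalAveragePooling1D()(x2)

    out = layers.Dense(n_classes, activation="softmax")(layers.concatenate([x1, x2]))
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = lstm_fcn(n_timesteps=140, n_classes=5)
model.summary()
```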
{
"docid": "af98839cc3e28820c8d79403d58d903a",
"text": "Annotating the increasing amounts of user-contributed images in a personalized manner is in great demand. However, this demand is largely ignored by the mainstream of automated image annotation research. In this paper we aim for personalizing automated image annotation by jointly exploiting personalized tag statistics and content-based image annotation. We propose a cross-entropy based learning algorithm which personalizes a generic annotation model by learning from a user's multimedia tagging history. Using cross-entropy-minimization based Monte Carlo sampling, the proposed algorithm optimizes the personalization process in terms of a performance measurement which can be flexibly chosen. Automatic image annotation experiments with 5,315 realistic users in the social web show that the proposed method compares favorably to a generic image annotation method and a method using personalized tag statistics only. For 4,442 users the performance improves, where for 1,088 users the absolute performance gain is at least 0.05 in terms of average precision. The results show the value of the proposed method.",
"title": ""
},
{
"docid": "1ade3a53c754ec35758282c9c51ced3d",
"text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.",
"title": ""
},
{
"docid": "a4c22a2947ad3ff6a7f103f9a82a28b7",
"text": "For a transaction database, a frequent itemset is an itemset included in at least a specified number of transactions. A frequent itemset P is maximal if P is included in no other frequent itemset, and closed if P is included in no other itemset included in the exactly same transactions as P . The problems of finding these frequent itemsets are fundamental in data mining, and from the applications, fast implementations for solving the problems are needed. In this paper, we propose efficient algorithms LCM (Linear time Closed itemset Miner), LCMfreq and LCMmax for these problems. We show the efficiency of our algorithms by computational experiments compared with existing algorithms.",
"title": ""
},
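LCM itself relies on careful prefix-preserving closure extensions, but the definitions in the passage (frequent, closed, maximal itemsets) can be checked with a tiny brute-force routine on a toy transaction database; this sketch is for clarity, not efficiency:

```python
from itertools import combinations

transactions = [frozenset(t) for t in
                [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"b"}]]
min_support = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(itemset <= t for t in transactions)

frequent = [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r) if support(frozenset(c)) >= min_support]

# Closed: no proper superset has the same support. Maximal: no frequent proper superset.
closed = [p for p in frequent
          if not any(p < q and support(q) == support(p) for q in frequent)]
maximal = [p for p in frequent if not any(p < q for q in frequent)]

print("frequent:", len(frequent), " closed:", len(closed), " maximal:", len(maximal))
for p in closed:
    print(sorted(p), "support", support(p))
```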
{
"docid": "bfd03d07c6a97a3b0a0c974f65070629",
"text": "People of skin of colour comprise the majority of the world's population and Asian subjects comprise more than half of the total population of the earth. Even so, the literature on the characteristics of the subjects with skin of colour is limited. Several groups over the past decades have attempted to decipher the underlying differences in skin structure and function in different ethnic skin types. However, most of these studies have been of small scale and in some studies interindividual differences in skin quality overwhelm any racial differences. There has been a recent call for more studies to address genetic together with phenotypic differences among different racial groups and in this respect several large-scale studies have been conducted recently. The most obvious ethnic skin difference relates to skin colour which is dominated by the presence of melanin. The photoprotection derived from this polymer influences the rate of the skin aging changes between the different racial groups. However, all racial groups are eventually subjected to the photoaging process. Generally Caucasians have an earlier onset and greater skin wrinkling and sagging signs than other skin types and in general increased pigmentary problems are seen in skin of colour although one large study reported that East Asians living in the U.S.A. had the least pigment spots. Induction of a hyperpigmentary response is thought to be through signaling by the protease-activated receptor-2 which together with its activating protease is increased in the epidermis of subjects with skin of colour. Changes in skin biophysical properties with age demonstrate that the more darkly pigmented subjects retaining younger skin properties compared with the more lightly pigmented groups. However, despite having a more compact stratum corneum (SC) there are conflicting reports on barrier function in these subjects. Nevertheless, upon a chemical or mechanical challenge the SC barrier function is reported to be stronger in subjects with darker skin despite having the reported lowest ceramide levels. One has to remember that barrier function relates to the total architecture of the SC and not just its lipid levels. Asian skin is reported to possess a similar basal transepidermal water loss (TEWL) to Caucasian skin and similar ceramide levels but upon mechanical challenge it has the weakest barrier function. Differences in intercellular cohesion are obviously apparent. In contrast reduced SC natural moisturizing factor levels have been reported compared with Caucasian and African American skin. These differences will contribute to differences in desquamation but few data are available. One recent study has shown reduced epidermal Cathepsin L2 levels in darker skin types which if also occurs in the SC could contribute to the known skin ashing problems these subjects experience. In very general terms as the desquamatory enzymes are extruded with the lamellar granules subjects with lowered SC lipid levels are expected to have lowered desquamatory enzyme levels. Increased pores size, sebum secretion and skin surface microflora occur in Negroid subjects. Equally increased mast cell granule size occurs in these subjects. The frequency of skin sensitivity is quite similar across different racial groups but the stimuli for its induction shows subtle differences. Nevertheless, several studies indicate that Asian skin maybe more sensitive to exogenous chemicals probably due to a thinner SC and higher eccrine gland density. 
In conclusion, we know more of the biophysical and somatosensory characteristics of ethnic skin types but clearly, there is still more to learn and especially about the inherent underlying biological differences in ethnic skin types.",
"title": ""
},
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
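A small sketch of the one-class SVM idea on binary "command used / not used" features, assuming synthetic command blocks rather than the Schonlau UNIX command data used in the study:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
vocab = 100                                  # distinct commands

def blocks(pref, n):
    """Binary feature vectors: which commands appear in each block of activity."""
    return (rng.random((n, vocab)) < pref).astype(float)

self_profile = np.zeros(vocab)
self_profile[:20] = 0.6                      # the legitimate user favours commands 0-19
intruder_profile = np.zeros(vocab)
intruder_profile[50:80] = 0.6                # a masquerader uses a different command set

train = blocks(self_profile, 200)            # one-class training: data from the user only
test = np.vstack([blocks(self_profile, 50), blocks(intruder_profile, 50)])
truth = np.array([1] * 50 + [-1] * 50)       # +1 = self, -1 = masquerader

clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(train)
pred = clf.predict(test)                     # +1 inlier, -1 outlier
print("accuracy:", (pred == truth).mean())
```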
{
"docid": "f7e3ee26413525acea763f7d4635ebab",
"text": "Network Attached Storage (NAS) and Virtual Machines (VMs) are widely used in data centers thanks to their manageability, scalability, and ability to consolidate resources. But the shift from physical to virtual clients drastically changes the I/O workloads seen on NAS servers, due to guest file system encapsulation in virtual disk images and the multiplexing of request streams from different VMs. Unfortunately, current NAS workload generators and benchmarks produce workloads typical to physical machines. This paper makes two contributions. First, we studied the extent to which virtualization is changing existing NAS workloads. We observed significant changes, including the disappearance of file system meta-data operations at the NAS layer, changed I/O sizes, and increased randomness. Second, we created a set of versatile NAS benchmarks to synthesize virtualized workloads. This allows us to generate accurate virtualized workloads without the effort and limitations associated with setting up a full virtualized environment. Our experiments demonstrate that the relative error of our virtualized benchmarks, evaluated across 11 parameters, averages less than 10%.",
"title": ""
},
{
"docid": "b31676e958e8345132780499e5dd968d",
"text": "Following triggered corporate bankruptcies, an increasing number of prediction models have emerged since 1960s. This study provides a critical analysis of methodologies and empirical findings of applications of these models across 10 different countries. The study’s empirical exercise finds that predictive accuracies of different corporate bankruptcy prediction models are, generally, comparable. Artificially Intelligent Expert System (AIES) models perform marginally better than statistical and theoretical models. Overall, use of Multiple Discriminant Analysis (MDA) dominates the research followed by logit models. Study deduces useful observations and recommendations for future research in this field. JEL classification: G33; C49; C88",
"title": ""
},
{
"docid": "398169d654c89191090c04fa930e5e62",
"text": "Psychedelic drug flashbacks have been a puzzling clinical phenomenon observed by clinicians. Flashbacks are defined as transient, spontaneous recurrences of the psychedelic drug effect appearing after a period of normalcy following an intoxication of psychedelics. The paper traces the evolution of the concept of flashback and gives examples of the varieties encountered. Although many drugs have been advocated for the treatment of flashback, flashbacks generally decrease in intensity and frequency with abstinence from psychedelic drugs.",
"title": ""
}
] |
scidocsrr
|
e8dc792a00fbb4b8f024f2d4b08791a2
|
Robust Camera Calibration and Player Tracking in Broadcast Basketball Video
|
[
{
"docid": "f48e6475c0afeac09262cdc2f5681208",
"text": "Semantic analysis of sport sequences requires camera calibration to obtain player and ball positions in real-world coordinates. For court sports like tennis, the marker lines on the field can be used to determine the calibration parameters. We propose a real-time calibration algorithm that can be applied to all court sports simply by exchanging the court model. The algorithm is based on (1) a specialized court-line detector, (2) a RANSAC-based line parameter estimation, (3) a combinatorial optimization step to localize the court within the set of detected line segments, and (4) an iterative court-model tracking step. Our results show real-time calibration of, e.g., tennis and soccer sequences with a computation time of only about 6 ms per frame.",
"title": ""
},
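A stripped-down sketch of the RANSAC line-parameter estimation step mentioned above, fitting a single line to noisy white-pixel coordinates; the court-model localization and the full camera-parameter estimation are beyond a short example, and the synthetic points stand in for detected court-line pixels:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Find the inliers of a 2D line a*x + b*y + c = 0 (a^2 + b^2 = 1) by RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm == 0:
            continue
        a, b = d[1] / norm, -d[0] / norm          # unit normal of the candidate line
        c = -(a * p1[0] + b * p1[1])
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic "court line" pixels near y = 0.5x + 10, plus random clutter.
rng = np.random.default_rng(1)
x = rng.uniform(0, 200, 150)
line_pts = np.c_[x, 0.5 * x + 10 + rng.normal(0, 1, 150)]
clutter = rng.uniform(0, 200, (80, 2))
pts = np.vstack([line_pts, clutter])

mask = ransac_line(pts)
print("inliers found:", int(mask.sum()), "of", len(pts))
```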
{
"docid": "cfadde3d2e6e1d6004e6440df8f12b5a",
"text": "We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses the line markings of the court for calibration and it can be applied to a variety of different sports since the geometric model of the court can be specified by the user. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture restrictions. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the following input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.",
"title": ""
},
{
"docid": "bc4791523b11a235d0b1c9e660ea1139",
"text": "In this paper, we present a novel system and effective algorithms for soccer video segmentation. The output, about whether the ball is in play, reveals high-level structure of the content. The first step is to classify each sample frame into 3 kinds of view using a unique domain-specific feature, grass-area-ratio. Here the grass value and classification rules are learned and automatically adjusted to each new clip. Then heuristic rules are used in processing the view label sequence, and obtain play/break status of the game. The results provide good basis for detailed content analysis in next step. We also show that lowlevel features and mid-level view classes can be combined to extract more information about the game, via the example of detecting grass orientation in the field. The results are evaluated under different metrics intended for different applications; the best result in segmentation is 86.5%.",
"title": ""
}
] |
[
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "05d3d0d62d2cff27eace1fdfeecf9814",
"text": "This article solves the equilibrium problem in a pure-exchange, continuous-time economy in which some agents face information costs or other types of frictions effectively preventing them from investing in the stock market. Under the assumption that the restricted agents have logarithmic utilities, a complete characterization of equilibrium prices and consumption/ investment policies is provided. A simple calibration shows that the model can help resolve some of the empirical asset pricing puzzles.",
"title": ""
},
{
"docid": "d2eb6c8dc6a3dd475248582361e89284",
"text": "In the last few years, uncertainty management has come to be recognized as a fundamental aspect of data integration. It is now accepted that it may not be possible to remove uncertainty generated during data integration processes and that uncertainty in itself may represent a source of relevant information. Several issues, such as the aggregation of uncertain mappings and the querying of uncertain mediated schemata, have been addressed by applying well-known uncertainty management theories. However, several problems lie unresolved. This article sketches an initial picture of this highly active research area; it details existing works in the light of a homogeneous framework, and identifies and discusses the leading issues awaiting solutions.",
"title": ""
},
{
"docid": "95612aa090b77fc660279c5f2886738d",
"text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.",
"title": ""
},
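A short sketch of three of the time-domain indices surveyed above (SDNN, RMSSD, pNN50), computed from a synthetic series of interbeat intervals in milliseconds; as the article stresses, published normative values for the matching recording length and population should be consulted for interpretation:

```python
import numpy as np

rng = np.random.default_rng(0)
ibi_ms = 800 + 40 * rng.standard_normal(300)      # synthetic IBIs around 75 bpm

def time_domain_hrv(ibi):
    diff = np.diff(ibi)
    return {
        "SDNN":    float(np.std(ibi, ddof=1)),               # overall variability (ms)
        "RMSSD":   float(np.sqrt(np.mean(diff ** 2))),        # short-term variability (ms)
        "pNN50":   float(np.mean(np.abs(diff) > 50) * 100),   # % successive diffs > 50 ms
        "mean_HR": float(60000.0 / np.mean(ibi)),             # beats per minute
    }

print(time_domain_hrv(ibi_ms))
```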
{
"docid": "21324c71d70ca79d2f2c7117c759c915",
"text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.",
"title": ""
},
{
"docid": "ae45ce27587d855735b3e8e67785f17b",
"text": "Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitive and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources, such as semantic networks and annotated corpora. In particular, much of the work on qualitative methods has had to focus on ‘‘toy’’ domains since currently available semantic networks generally lack broad coverage. Similarly, much of the work on quantitative methods has had to depend on small amounts of hand-labeled text for testing and training. We have achieved considerable progress recently by taking advantage of a new source of testing and training materials. Rather than depending on small amounts of hand-labeled text, we have been making use of relatively large amounts of parallel text, text such as the Canadian Hansards, which are available in multiple languages. The translation can often be used in lieu of hand-labeling. For example, consider the polysemous word sentence, which has two major senses: (1) a judicial sentence, and (2), a syntactic sentence. We can collect a number of sense (1) examples by extracting instances that are translated as peine, and we can collect a number of sense (2) examples by extracting instances that are translated as phrase. In this way, we have been able to acquire a considerable amount of testing and training material for developing and testing our disambiguation algorithms. The availability of this testing and training material has enabled us to develop quantitative disambiguation methods that achieve 92 percent accuracy in discriminating between two very distinct senses of a noun such as sentence. In the training phase, we collect a number of instances of each sense of the polysemous noun. Then in the testing phase, we are given a new instance of the noun, and are asked to assign the instance to one of the senses. We attempt to answer this question by comparing the context of the unknown instance with contexts of known instances using a Bayesian argument that has been applied successfully in related tasks such as author identification and information retrieval. The Bayesian classifier requires an estimate of Pr(wsense), the probability of finding the word w in a particular context. Care must be taken in estimating these probabilities since there are so many parameters (e.g., 100,000 for each sense) and so little training material (e.g., 5,000 words for each sense). We have found that it helps to smooth the estimates obtained from the training material with estimates obtained from the entire corpus. The idea is that the training material provides poorly measured estimates, whereas the entire corpus provides less relevant estimates. We seek a trade-off between measurement errors and relevance using a novel interpolation procedure that has one free parameter, an estimate of how much the conditional probabilities Pr(wsense) will differ from the global probabilities Pr(w). In the sense tagging application, we expect quite large differences, more than 20% of the vocabulary behaves very differently in the conditional context; in other applications such as author identification, we expect much smaller differences and find that less than 2% of the vocabulary depends very much on the author. The ‘‘sense disambiguation’’ problem covers a broad set of issues. Dictionaries, for example, make use of",
"title": ""
},
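A toy sketch of the Bayesian sense classifier sketched above: smoothed sense-conditional word probabilities Pr(w | sense) are interpolated with global probabilities Pr(w) before scoring a new context. The tiny corpus, the interpolation weight, and the two "sentence" senses here are illustrative stand-ins for the parallel-text training data described in the passage:

```python
import math
from collections import Counter

train = {
    "judicial": ["the judge handed down a harsh sentence for the crime",
                 "he served a long prison sentence after the trial"],
    "syntactic": ["the sentence contains a subject and a verb",
                  "parse each sentence into grammatical phrases"],
}

sense_counts = {s: Counter(w for ctx in ctxs for w in ctx.split())
                for s, ctxs in train.items()}
global_counts = sum(sense_counts.values(), Counter())
V = len(global_counts)
lam = 0.7                                              # weight on the sense-conditional estimate

def log_score(context, sense):
    counts = sense_counts[sense]
    total = sum(counts.values())
    g_total = sum(global_counts.values())
    score = math.log(1.0 / len(train))                 # uniform prior over senses
    for w in context.split():
        p_cond = (counts[w] + 1) / (total + V)         # smoothed Pr(w | sense)
        p_glob = (global_counts[w] + 1) / (g_total + V)  # smoothed global Pr(w)
        score += math.log(lam * p_cond + (1 - lam) * p_glob)  # interpolated estimate
    return score

test = "the court imposed the maximum sentence"
print(max(train, key=lambda s: log_score(test, s)))    # -> judicial
```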
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
},
{
"docid": "28c22ea34762a7bf65fdc50a37b558f5",
"text": "Web threats pose the most significant cyber threat. Websites have been developed or manipulated by attackers for use as attack tools. Existing malicious website detection techniques can be classified into the categories of static and dynamic detection approaches, which respectively aim to detect malicious websites by analyzing web contents, and analyzing run-time behaviors using honeypots. However, existing malicious website detection approaches have technical and computational limitations to detect sophisticated attacks and analyze massive collected data. The main objective of this research is to minimize the limitations of malicious website detection. This paper presents a novel cross-layer malicious website detection approach which analyzes network-layer traffic and application-layer website contents simultaneously. Detailed data collection and performance evaluation methods are also presented. Evaluation based on data collected during 37 days shows that the computing time of the cross-layer detection is 50 times faster than the dynamic approach while detection can be almost as effective as the dynamic approach. Experimental results indicate that the cross-layer detection outperforms existing malicious website detection techniques.",
"title": ""
},
{
"docid": "15cfa9005e68973cbca60f076180b535",
"text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.",
"title": ""
},
{
"docid": "1df4fad2d5448364834608f4bc9d10a0",
"text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive selfperceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates.More than half of 9–14 year-olds agree that, “when you grow up, the more money you have, the happier you are,” and over 60% agree that, “the only kind of job I want when I grow up is one that getsme a lot of money” (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have lead social scientists to conclude that adolescents today are “...the most brand-oriented, consumer-involved, and materialistic generation in history” (Schor, 2004, p. 13). What causes adolescents to bematerialistic? Themost consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and ⁎ Corresponding author. Villanova School of Business, 800 Lancaster Avenue, Villanova, PA 19085, USA. Fax: +1 520 621 7483. E-mail addresses: chaplin@eller.arizona.edu (L.N. Chaplin), djohn@umn.edu (D.R. John). 1057-7408/$ see front matter © 2010 Society for Consumer Psychology. Publish doi:10.1016/j.jcps.2010.02.002 Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations betweenmaterialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. 
We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in ed by Elsevier Inc. All rights reserved. 1 Support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976). Parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988). 177 L.N. Chaplin, D.R. John / Journal of Consumer Psychology 20 (2010) 176–184 divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. 
Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism. Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. Isolating mediators, such as self-esteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.",
"title": ""
},
{
"docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2",
"text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.",
"title": ""
},
{
"docid": "b15c689ff3dd7b2e7e2149e73b5451ac",
"text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.",
"title": ""
},
{
"docid": "e18b08d7f7895339b432a9f9faf5a923",
"text": "We present a parallelized navigation architecture that is capable of running in real-time and incorporating long-term loop closure constraints while producing the optimal Bayesian solution. This architecture splits the inference problem into a low-latency update that incorporates new measurements using just the most recent states (filter), and a high-latency update that is capable of closing long loops and smooths using all past states (smoother). This architecture employs the probabilistic graphical models of Factor Graphs, which allows the low-latency inference and highlatency inference to be viewed as sub-operations of a single optimization performed within a single graphical model. A specific factorization of the full joint density is employed that allows the different inference operations to be performed asynchronously while still recovering the optimal solution produced by a full batch optimization. Due to the real-time, asynchronous nature of this algorithm, updates to the state estimates from the highlatency smoother will naturally be delayed until the smoother calculations have completed. This architecture has been tested within a simulated aerial environment and on real data collected from an autonomous ground vehicle. In all cases, the concurrent architecture is shown to recover the full batch solution, even while updated state estimates are produced in real-time.",
"title": ""
},
{
"docid": "d8af86876a53cdafc8973b9e78838ca7",
"text": "Preferred walking speed (PWS) reflects the integrated performance of the relevant physiological sub-systems, including energy expenditure. It remains unclear whether the PWS during over-ground walking is chosen to optimize one's balance control because studies on the effects of speed on the body's balance control have been limited. The current study aimed to bridge the gap by quantifying the effects of the walking speed on the body's center of mass (COM) motion relative to the center of pressure (COP) in terms of the changes and directness of the COM-COP inclination angle (IA) and its rate of change (RCIA). Data of the COM and COP were measured from fifteen young healthy males at three walking speeds including PWS using a motion capture system. The values of IAs and RCIAs at key gait events and their average values over gait phases were compared between speeds using one-way repeated measures ANOVA. With increasing walking speed, most of the IA and RCIA related variables were significantly increased (p<0.05) but not for those of the frontal IA. Significant quadratic trends (p<0.05) with highest directness at PWS were found in IA during single-limb support, and in RCIA during single-limb and double-limb support. The results suggest that walking at PWS corresponded to the COM-COP control maximizing the directness of the RCIAs over the gait cycle, a compromise between the effects of walking speed and the speed of weight transfer. The data of IA and RCIA at PWS may be used in future assessment of balance control ability in people with different levels of balance impairments.",
"title": ""
},
{
"docid": "ba4fb2947987c87a5103616d4bc138de",
"text": "In intelligent tutoring systems with natural language dialogue, speech act classification, the task of detecting learners’ intentions, informs the system’s response mechanism. In this paper, we propose supervised machine learning models for speech act classification in the context of an online collaborative learning game environment. We explore the role of context (i.e. speech acts of previous utterances) for speech act classification. We compare speech act classification models trained and tested with contextual and non-contextual features (contents of the current utterance). The accuracy of the proposed models is high. A surprising finding is the modest role of context in automatically predicting the speech acts.",
"title": ""
},
{
"docid": "116294113ff20558d3bcb297950f6d63",
"text": "This paper aims to analyze the influence of a Halbach array by using a semi analytical design optimization approach on a novel electrical machine design with slotless air gap winding. The useable magnetic flux density caused by the Halbach array magnetization is studied and compared to conventional radial magnetization systems. First, several discrete magnetic flux densities are analyzed for an infinitesimal wire size in an air gap range from 0.1 mm to 5 mm by the finite element method in Ansys Maxwell. Fourier analysis is used to approximate continuous functions for each magnetic flux density characteristic for each air gap height. Then, using a six-step commutation control, the magnetic flux acting on a certain phase geometry is considered for a parametric machine model. The design optimization approach utilizes the design freedom of the magnetic flux density shape in air gap as well as the heights and depths of all magnetic circuit components, which are stator and rotor cores, permanent magnets, air gap, and air gap winding. Use of a nonlinear optimization formulation, allows for fast and precise analytical calculation of objective function. In this way the influence of both magnetizations on Pareto optimal machine design sets, when mass and efficiency are weighted, are compared. Other design requirements, such as torque, current, air gap and wire height, are considered via constraints on this optimization. Finally, an optimal motor design study for the Halbach array magnetization pattern is compared to the conventional radial magnetization. As a reference design, an existing 15-inch rim wheel-hub motor with air gap winding is used.",
"title": ""
},
{
"docid": "8a8dd829c9b7ce0c46ef1fd0736cc006",
"text": "In this paper, we introduce a generic inference hybrid framework for Convolutional Recurrent Neural Network (conv-RNN) of semantic modeling of text, seamless integrating the merits on extracting different aspects of linguistic information from both convolutional and recurrent neural network structures and thus strengthening the semantic understanding power of the new framework. Besides, based on conv-RNN, we also propose a novel sentence classification model and an attention based answer selection model with strengthening power for the sentence matching and classification respectively. We validate the proposed models on a very wide variety of data sets, including two challenging tasks of answer selection (AS) and five benchmark datasets for sentence classification (SC). To the best of our knowledge, it is by far the most complete comparison results in both AS and SC. We empirically show superior performances of conv-RNN in these different challenging tasks and benchmark datasets and also summarize insights on the performances of other state-of-the-arts methodologies.",
"title": ""
},
{
"docid": "98f811a1b5445763505009684ef1d160",
"text": "This study examined the relationship between three of the ‘‘Big Five’’ traits (neuroticism, extraversion, and openness), self-esteem, loneliness and narcissism, and Facebook use. Participants were 393 first year undergraduate psychology students from a medium-sized Australian university who completed an online questionnaire. Negative binomial regression models showed that students with higher openness levels reported spending more time on Facebook and having more friends on Facebook. Interestingly, students with higher levels of loneliness reported having more Facebook friends. Extraversion, neuroticism, selfesteem and narcissism did not have significant associations with Facebook use. It was concluded that students who are high in openness use Facebook to connect with others in order to discuss a wide range of interests, whereas students who are high in loneliness use the site to compensate for their lack of offline",
"title": ""
},
{
"docid": "40e129b6264892f1090fd9a8d6a9c1ae",
"text": "We introduce an algorithm for text detection and localization (\"spotting\") that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82% and 83%, respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.",
"title": ""
}
] |
scidocsrr
|
3d5f5152e17c3246c4220e2ef4702a5a
|
InputFinder: Reverse Engineering Closed Binaries using Hardware Performance Counters
|
[
{
"docid": "049c9e3abf58bfd504fa0645bb4d1fdc",
"text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.",
"title": ""
},
{
"docid": "7cb61609adf6e3c56c762d6fe322903c",
"text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.",
"title": ""
}
] |
[
{
"docid": "4b250bd1c7bcca08f011f5ebc2808e4c",
"text": "As a result of the rapid growth of available services provided via Internet, as well as multiple accounts a person owns, reliable user authentication schemes are mandatory for security purposes. OTP systems have prevailed as the best viable solution for security over sensitive information and pose an interesting field for research. Although, OTP schemes enhance authentication's security through various algorithmic customizations and extensions, certain compromises should be made; especially since excessively tolerable to vulnerability systems tend to have high computational and storage needs. In order to minimize the risk of a non-authenticated user having access to sensitive data, depending on the use, OTP system's architecture differs; as its tolerance towards already known attack methods. In this paper, the most widely accepted and promising OTP schemes are described and evaluated in terms of resistance against security attacks and in terms of computational intensity (performance efficiency). The results showed that there is a correlation between the security level, the computational efficiency and the storage needs of an OTP system.",
"title": ""
},
{
"docid": "22c9f931198f054e7994e7f1db89a194",
"text": "Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting Euclidean distance metric often fail to return satisfactory results mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named as “Laplacian Regularized Metric Learning” (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "e3a7b1302e70b003acac4c15057908a7",
"text": "modeling business processes a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach a petri net-based software process model for developing modeling business processes a petri net oriented approach petri nets and business process management dagstuhl modeling business processes a petri net oriented approach killer app for petri nets process mining a petri net approach to analysis and composition of web information gathering and process modeling in a petri net modeling business processes a petri net oriented approach an ontology-based evaluation of process modeling with business process modeling in inspire using petri nets document about nc fairlane manuals is available on print from business process modeling to the specification of modeling of adaptive cyber physical systems using aspect petri net theory and the modeling of systems tbsh towards agent-based modeling and verification of a discussion of object-oriented process modeling modeling and simulation versions of business process using workflow modeling for virtual enterprise: a petri net simulation of it service processes with petri-nets george mason university the volgenau school of engineering process-oriented business performance management with syst 620 / ece 673 discrete event systems general knowledge questions answers on india tool-based business process modeling using the som approach income/w f a petri net based approach to w orkflow segment 2 exam study guide world history jbacs specifying business processes over objects rd.springer english june exam 2013 question paper 3 nulet",
"title": ""
},
{
"docid": "f8c7fcba6d0cb889836dc868f3ba12c8",
"text": "This article reviews dominant media portrayals of mental illness, the mentally ill and mental health interventions, and examines what social, emotional and treatment-related effects these may have. Studies consistently show that both entertainment and news media provide overwhelmingly dramatic and distorted images of mental illness that emphasise dangerousness, criminality and unpredictability. They also model negative reactions to the mentally ill, including fear, rejection, derision and ridicule. The consequences of negative media images for people who have a mental illness are profound. They impair self-esteem, help-seeking behaviours, medication adherence and overall recovery. Mental health advocates blame the media for promoting stigma and discrimination toward people with a mental illness. However, the media may also be an important ally in challenging public prejudices, initiating public debate, and projecting positive, human interest stories about people who live with mental illness. Media lobbying and press liaison should take on a central role for mental health professionals, not only as a way of speaking out for patients who may not be able to speak out for themselves, but as a means of improving public education and awareness. Also, given the consistency of research findings in this field, it may now be time to shift attention away from further cataloguing of media representations of mental illness to the more challenging prospect of how to use the media to improve the life chances and recovery possibilities for the one in four people living with mental disorders.",
"title": ""
},
{
"docid": "9365a612900a8bf0ddef8be6ec17d932",
"text": "Stabilization exercise program has become the most popular treatment method in spinal rehabilitation since it has shown its effectiveness in some aspects related to pain and disability. However, some studies have reported that specific exercise program reduces pain and disability in chronic but not in acute low back pain, although it can be helpful in the treatment of acute low back pain by reducing recurrence rate (Ferreira et al., 2006).",
"title": ""
},
{
"docid": "aff3f2e70cb7f6dbff9dad0881e3e86f",
"text": "Knowledge graphs holistically integrate information about entities from multiple sources. A key step in the construction and maintenance of knowledge graphs is the clustering of equivalent entities from different sources. Previous approaches for such an entity clustering suffer from several problems, e.g., the creation of overlapping clusters or the inclusion of several entities from the same source within clusters. We therefore propose a new entity clustering algorithm CLIP that can be applied both to create entity clusters and to repair entity clusters determined with another clustering scheme. In contrast to previous approaches, CLIP not only uses the similarity between entities for clustering but also further features of entity links such as the so-called link strength. To achieve a good scalability we provide a parallel implementation of CLIP based on Apache Flink. Our evaluation for different datasets shows that the new approach can achieve substantially higher cluster quality than previous approaches.",
"title": ""
},
{
"docid": "b7bfebcf77d9486473b9fcd1f4b91e63",
"text": "One of the most widespread applications of the Global Positioning System (GPS) is vehicular navigation. Improving the navigation accuracy continues to be a focus of research, commonly answered by the use of additional sensors. A sensor commonly fused with GPS is the inertial measurement unit (IMU). Due to the fact that the requirements of commercial systems are low cost, small size, and power conservative, micro-electro mechanical sensors (MEMS) IMUs are used. They provide navigation capability even in the absence of GPS signals or in the presence of high multipath or jamming. This paper addresses a centralized filter construction whereby navigation solutions from multiple IMUs are fused together to improve accuracy in GPS degraded areas. The proposed filter is a collection of several single IMU block filters. Each block filter is a 21 state IMU filter. Because each block filter estimates position, velocity and attitude, the system can utilize relative updates between the IMUs. These relative updates provide a method of reducing the position drift in the absence of GPS observations. The proposed filter’s performance is analyzed as a function of the number of IMUs used and relative update type, using a data set consisting of GPS outages, urban canyons and residential open sky conditions. While the use of additional IMUs (including a single IMU) provides negligible improvement in open sky conditions (where GPS alone is sufficient), the use of two, three, four and five IMUs provided a horizontal position improvement of 25 %, 29 %, 32 %, and 34 %, respectively, when GPS observations are removed for 30 seconds. Similarly, the velocity RMS improved by 25 %, 31%, 33%, and 34% for two, three, four and five IMUs, respectively. Attitude estimation also improves significantly ranging from 30 % – 76 %. Results also indicate that the use of more IMUs provides the system with better multipath rejection and performance in urban canyons.",
"title": ""
},
{
"docid": "5cd5cc82b973ede163528a5755c5cc75",
"text": "The wave of digital health is continuously growing and promises to transform healthcare and optimize the patients' experience. Asthma is in the center of these digital developments, as it is a chronic disease that requires the continuous attention of both health care professionals and patients themselves. The accurate and timely assessment of the state of asthma is the fundamental basis of digital health approaches and is also the most significant factor toward the preventive and efficient management of the disease. Furthermore, the necessity of inhaled medication offers a basic platform upon which modern technologies can be integrated, namely the inhaler device itself. Inhaler-based monitoring devices were introduced in the beginning of the 1980s and have been evolving but mainly for the assessment of medication adherence. As technology progresses and novel sensing components are becoming available, the enhancement of inhalers with a wider range of monitoring capabilities holds the promise to further support and optimize asthma self-management. The current article aims to take a step for the mapping of this territory and start the discussion among healthcare professionals and engineers for the identification and the development of technologies that can offer personalized asthma self-management with clinical significance. In this direction, a technical review of inhaler based monitoring devices is presented, together with an overview of their use in clinical research. The aggregated results are then summarized and discussed for the identification of key drivers that can lead the future of inhalers.",
"title": ""
},
{
"docid": "23fe6b01d4f31e69e753ff7c78674f19",
"text": "Advancements in information technology often task users with complex and consequential privacy and security decisions. A growing body of research has investigated individuals’ choices in the presence of privacy and information security tradeoffs, the decision-making hurdles affecting those choices, and ways to mitigate such hurdles. This article provides a multi-disciplinary assessment of the literature pertaining to privacy and security decision making. It focuses on research on assisting individuals’ privacy and security choices with soft paternalistic interventions that nudge users toward more beneficial choices. The article discusses potential benefits of those interventions, highlights their shortcomings, and identifies key ethical, design, and research challenges.",
"title": ""
},
{
"docid": "11ad0993b62e016175638d80f9acd694",
"text": "Progressive macular hypomelanosis (PMH) is a skin disorder that is characterized by hypopigmented macules and usually seen in young adults. The skin microbiota, in particular the bacterium Propionibacterium acnes, is suggested to play a role. Here, we compared the P. acnes population of 24 PMH lesions from eight patients with corresponding nonlesional skin of the patients and matching control samples from eight healthy individuals using an unbiased, culture-independent next-generation sequencing approach. We also compared the P. acnes population before and after treatment with a combination of lymecycline and benzoylperoxide. We found an association of one subtype of P. acnes, type III, with PMH. This type was predominant in all PMH lesions (73.9% of reads in average) but only detected as a minor proportion in matching control samples of healthy individuals (14.2% of reads in average). Strikingly, successful PMH treatment is able to alter the composition of the P. acnes population by substantially diminishing the proportion of P. acnes type III. Our study suggests that P. acnes type III may play a role in the formation of PMH. Furthermore, it sheds light on substantial differences in the P. acnes phylotype distribution between the upper and lower back and abdomen in healthy individuals.",
"title": ""
},
{
"docid": "80f9ecd1945cbdc93ecdb28afd44d8e3",
"text": "This paper addresses the problem of pixel-level segmentation and classification of scene images with an entirely learning-based approach using Long Short Term Memory (LSTM) recurrent neural networks, which are commonly used for sequence classification. We investigate two-dimensional (2D) LSTM networks for natural scene images taking into account the complex spatial dependencies of labels. Prior methods generally have required separate classification and image segmentation stages and/or pre- and post-processing. In our approach, classification, segmentation, and context integration are all carried out by 2D LSTM networks, allowing texture and spatial model parameters to be learned within a single model. The networks efficiently capture local and global contextual information over raw RGB values and adapt well for complex scene images. Our approach, which has a much lower computational complexity than prior methods, achieved state-of-the-art performance over the Stanford Background and the SIFT Flow datasets. In fact, if no pre- or post-processing is applied, LSTM networks outperform other state-of-the-art approaches. Hence, only with a single-core Central Processing Unit (CPU), the running time of our approach is equivalent or better than the compared state-of-the-art approaches which use a Graphics Processing Unit (GPU). Finally, our networks' ability to visualize feature maps from each layer supports the hypothesis that LSTM networks are overall suited for image processing tasks.",
"title": ""
},
{
"docid": "7c06010200faa47511896228fcb36097",
"text": "Polysaccharide immunomodulators were first discovered over 40 years ago. Although very few have been rigorously studied, recent reports have revealed the mechanism of action and structure-function attributes of some of these molecules. Certain polysaccharide immunomodulators have been identified that have profound effects in the regulation of immune responses during the progression of infectious diseases, and studies have begun to define structural aspects of these molecules that govern their function and interaction with cells of the host immune system. These polymers can influence innate and cell-mediated immunity through interactions with T cells, monocytes, macrophages, and polymorphonuclear lymphocytes. The ability to modulate the immune response in an appropriate way can enhance the host's immune response to certain infections. In addition, this strategy can be utilized to augment current treatment regimens such as antimicrobial therapy that are becoming less efficacious with the advent of antibiotic resistance. This review focuses on recent studies that illustrate the structural and biologic activities of specific polysaccharide immunomodulators and outlines their potential for clinical use.",
"title": ""
},
{
"docid": "ca0cef542eafe283ba4fa224e3c87df6",
"text": "Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals evolved for this purpose-facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion, some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see [4] for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish universal facial expressions of \"fear\" and \"disgust.\" Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for cross-cultural communication and globalization.",
"title": ""
},
{
"docid": "22c4d3ad653b5bbf8e2f9fb426ee2b2d",
"text": "The breadth and complexity of the field of mathematics make the identification and study of the cognitive phenotypes that define learning disabilities in mathematics (MD) a formidable endeavor. A learning disability can result from deficits in the ability to represent or process information in one or all of the many mathematical domains (e.g., geometry) or in one or a set of individual competencies within each domain. The goal is further complicated by the task of distinguishing poor achievement due to inadequate instruction from poor achievement due to an actual cognitive disability (Geary, Brown, & Samaranayake, 1991). One approach that can be used to circumvent this instructional confound involves is to apply the theories and methods used to study mathematical competencies in normal children to the study of children with a picture of the cognitive and brain systems that can contribute to MD begins to emerge. The combination of theoretical and empirical approaches has been primarily applied to the study of numerical and arithmetical competencies and is therefore only a first step to a complete understanding of the cognitive and brain systems that support mathematical compe-tency, and any associated learning disabilities. It is, nonetheless, a start. We overview what this research strategy has revealed about children with MD in the second section, and discuss diagnostic, prevalence, and etiological issues in the first section. In the third section, we present a general organizational framework for approaching the study of MD in any mathematical domain and use this framework and an earlier taxonomy of MD subtypes (Geary, 1993) to better understand the cognitive deficits described in the second section.",
"title": ""
},
{
"docid": "c875bfed84555d5a32a32e39a703e703",
"text": "For mmWave directional air interface expected in 5G communications, current discontinuous reception (DRX) mechanisms would be inadequate. Beam searching, for alignment of beams at User Equipment (UE) and 5G base station (NR nodeB), cannot be avoided in directional communication. We propose to exploit dual connectivity of UE, to both LTE eNB and NR nodeB, for effective 5G DRX. We present a novel hybrid directional-DRX (HD-DRX) mechanism, where beam searching is performed only when necessary. Probabilistic estimate of power saving and delay is conducted by capturing various states of UE through a semi-Markov process. Our numerical analysis achieves 13% improvement in power saving for HD-DRX compared with directional-DRX. We validate our numerical analysis with simulation studies on real traffic trace.",
"title": ""
},
{
"docid": "1c4165c47ae9870e31a7106f1b82e94d",
"text": "INTRODUCTION\nPrevious studies found that aircraft maintenance workers may be exposed to organophosphates in hydraulic fluid and engine oil. Studies have also illustrated a link between long-term low-level organophosphate pesticide exposure and depression.\n\n\nMETHODS\nA questionnaire containing the Patient Health Questionnaire 8 depression screener was e-mailed to 52,080 aircraft maintenance workers (with N = 4801 complete responses) in a cross-sectional study to determine prevalence and severity of depression and descriptions of their occupational exposures.\n\n\nRESULTS\nThere was no significant difference between reported depression prevalence and severity in similar exposure groups in which aircraft maintenance workers were exposed or may have been exposed to organophosphate esters compared to similar exposure groups in which they were not exposed. However, a dichotomous measure of the prevalence of depression was significantly associated with self-reported exposure levels from low (OR: 1.21) to moderate (OR: 1.68) to high exposure (OR: 2.70) and with each exposure route including contact (OR: 1.68), inhalation (OR: 2.52), and ingestion (OR: 2.55). A self-reported four-level measure of depression severity was also associated with a self-reported four-level measure of exposure.\n\n\nDISCUSSION\nBased on self-reported exposures and outcomes, an association is observed between organophosphate exposure and depression; however, we cannot assume that the associations we observed are causal because some workers may have been more likely to report exposure to organophosphate esters and also more likely to report depression. Future studies should consider using a larger sample size, better methods for characterizing crew chief exposures, and bioassays to measure dose rather than exposure. Hardos JE, Whitehead LW, Han I, Ott DK, Waller DK. Depression prevalence and exposure to organophosphate esters in aircraft maintenance workers. Aerosp Med Hum Perform. 2016; 87(8):712-717.",
"title": ""
},
{
"docid": "dfa482fe44d97e3a3812e35a3964b39c",
"text": "This paper illustrates the use of the recently introduced method of partial directed coherence in approaching how interactions among neural structures change over short time spans that characterize well defined behavioral states. Central to the method is its use of multivariate time series modelling in conjunction with the concept of Granger causality. Simulated neural network models were used to illustrate the technique's power and limitations when dealing with neural spiking data. This was followed by the analysis of multi-unit activity data illustrating dynamical change in the interaction of thalamo-cortical structures in a behaving rat.",
"title": ""
},
{
"docid": "b48d9e46a22fce04dac6949b08a7673c",
"text": "Khadtare Y, Chaudhari A, Waghmare P, Prashant S. (laser-assisted new attachment procedure) The LANAP Protocol A Minimally Invasive Bladeless Procedure. J Periodontol Med Clin Pract 2014;01: 264-271 1 2 2 3 Dr. Yogesh Khadtare , Dr. Amit Chaudhari , Dr. Pramod Waghmare , Dr. Shekhar Prashant Review Article Journal of Periodontal Medicine & Clinical Practice JPMCP Journal of Periodontal Medicine & Clinical Practice",
"title": ""
},
{
"docid": "8a3031bb351b3a285bbb7b90db407801",
"text": "Koch-shaped dipoles are introduced for the first time in a wideband antenna design and evolve the traditional Euclidean log-periodic dipole array into the log-periodic Koch-dipole array (LPKDA). Antenna size can be reduced while maintaining its overall performance characteristics. Observations and characteristics of both antennas are discussed. Advantages and disadvantages of the proposed LPKDA are validated through a fabricated proof-of-concept prototype that exhibited approximately 12% size reduction with minimal degradation in the impedance and pattern bandwidths. This is the first application of Koch prefractal elements in a miniaturized wideband antenna design.",
"title": ""
},
{
"docid": "94b9f7e9879e01718dc453c83f0f363d",
"text": "This paper aims to acquaint researchers in the quantitative social and behavior sciences with recent advances in causal inference which provide a systematic methodology for defining, estimating, testing, and defending causal claims in experimental and observational studies. These advances are illustrated using a general theory of causation based on nonparametric structural equation models (SEM) – a natural generalization of those used by econometricians and social scientists in the 1950-60s, which provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called “causal effects” or “policy evaluation”) (2) queries about probabilities of counterfactuals, (including assessment of “regret,” “attribution” or “causes of effects”) and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper clarifies the role of propensity score matching in causal analysis, defines the relationships between the structural and potential-outcome frameworks, and develops symbiotic tools that use the strong features of both. A paper submitted to Sociological Methodology. This research benefited from conversations with Peter Bentler, Stephen Morgan, Jeffrey Wooldridge and was supported in parts by NIH grant #1R01 LM009961-01, NSF grant #IIS-0914211, and ONR grant #N000-14-09-1-0665. 1 Submitted to Sociological Methodology. TECHNICAL REPORT R-355 October 2009",
"title": ""
}
] |
scidocsrr
|
ead04a502da002be7bfcd130dec750d8
|
Variational PatchMatch MultiView Reconstruction and Refinement
|
[
{
"docid": "0d13be9f5e2082af96c370d3c316204f",
"text": "We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.",
"title": ""
}
] |
[
{
"docid": "2ce2d44c6c19ad683989bbf8b117f778",
"text": "Modern computer systems feature multiple homogeneous or heterogeneous computing units with deep memory hierarchies, and expect a high degree of thread-level parallelism from the software. Exploitation of data locality is critical to achieving scalable parallelism, but adds a significant dimension of complexity to performance optimization of parallel programs. This is especially true for programming models where locality is implicit and opaque to programmers. In this paper, we introduce the hierarchical place tree (HPT) model as a portable abstraction for task parallelism and data movement. The HPT model supports co-allocation of data and computation at multiple levels of a memory hierarchy. It can be viewed as a generalization of concepts from the Sequoia and X10 programming models, resulting in capabilities that are not supported by either. Compared to Sequoia, HPT supports three kinds of data movement in a memory hierarchy rather than just explicit data transfer between adjacent levels, as well as dynamic task scheduling rather than static task assignment. Compared to X10, HPT provides a hierarchical notion of places for both computation and data mapping. We describe our work-in-progress on implementing the HPT model in the Habanero-Java (HJ) compiler and runtime system. Preliminary results on general-purpose multicore processors and GPU accelerators indicate that the HPT model can be a promising portable abstraction for future multicore processors.",
"title": ""
},
{
"docid": "dbafe7db0387b56464ac630404875465",
"text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.",
"title": ""
},
{
"docid": "4338819a7ff4753c37f209ec0ba010ba",
"text": "Hydraulic and pneumatic actuators are used as actuators of robots. They have large capabilities of instantaneous output, but with problems of increase in size and mass, and difficulty for precise control. In contrast, electromagnetic motors have better controllability, lower cost, and smaller size. However, in order to actuate robots, they are usually used with reducers which have high reduction ratio, and it is difficult to realize creature-like dynamic motions such as fast running and high jumping, due to low backdrivability of joints. To solve the problem, we have developed leg mechanisms, which consist of a spring and a damper inspired by bi-articular muscle-tendon complex of animals. The final target is to develop a quadruped robot which can walk, run fast and jump highly like a cat. A cat mainly uses its hind legs in jumping and front legs in landing. It implies that the hind legs play an important role in jumping, and that the front legs do in landing. For this reason, it is necessary to design different leg structures for front and hind legs. In this paper, we develop a new front leg mechanism suitable to a hind leg mechanism which was already made by our group, and make a small quadruped robot. As the result of experiments for dynamic motions, stable running trot at a speed of 3.5 kilometers per hour and forward jumping of 1 body length per jump have been realized by the robot.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "bdf81fccbfa77dadcad43699f815475e",
"text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.",
"title": ""
},
{
"docid": "a93320450450dd761ea73dfc395c8b46",
"text": "There has been much discussion recently about the scope and limits of purely symbolic models of the mind and abotlt the proper role of connectionism in cognitive modeling. This paper describes the \"symbol grounding problem\": How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. \"An X is a Y that is Z \"). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic \"module,\" however; the symbolic functions would emerge as an intrinsically \"dedicated\" symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.",
"title": ""
},
{
"docid": "15008ad340cc2358a65deecc8e8cbbea",
"text": "We present a framework for generating street networks and parcel layouts. Our goal is the generation of high-quality layouts that can be used for urban planning and virtual environments. We propose a solution based on hierarchical domain splitting using two splitting types: streamline-based splitting, which splits a region along one or multiple streamlines of a cross field, and template-based splitting, which warps pre-designed templates to a region and uses the interior geometry of the template as the splitting lines. We combine these two splitting approaches into a hierarchical framework, providing automatic and interactive tools to explore the design space.",
"title": ""
},
{
"docid": "c1cdb2ab2a594e7fbb1dfdb261f0910c",
"text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.",
"title": ""
},
{
"docid": "cceec94ed2462cd657be89033244bbf9",
"text": "This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a posttest and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.",
"title": ""
},
{
"docid": "70b221d169e042b37c04cd7ef23b9c60",
"text": "Understanding the biology of prostate cancer metastasis has been limited by the lack of tissue for study. We studied the clinical data, distribution of prostate cancer involvement, morphology, immunophenotypes, and gene expression from 30 rapid autopsies of men who died of hormone-refractory prostate cancer. A tissue microarray was constructed and quantitatively evaluated for expression of prostate-specific antigen, androgen receptor, chromogranin, synaptophysin, MIB-1, and alpha-methylacylCoA-racemase markers. Hierarchical clustering of 16 rapid autopsy tumor samples was performed to evaluate the cDNA expression pattern associated with the morphology. Comparisons were made between patients as well as within the same patient. Metastatic hormone-refractory prostate cancer has a heterogeneous morphology, immunophenotype, and genotype, demonstrating that \"metastatic disease\" is a group of diseases even within the same patient. An appreciation of this heterogeneity is critical to evaluating diagnostic and prognostic biomarkers as well as to designing therapeutic targets for advanced disease.",
"title": ""
},
{
"docid": "c78c382da2513de0a9c55de31c230f7d",
"text": "Text entry using gaze-based interaction is a vital communication tool for people with motor impairments. Most solutions require the user to fixate on a key for a given dwell time to select it, thus limiting the typing speed. In this paper we introduce EyeSwipe, a dwell-time-free gaze-typing method. With EyeSwipe, the user gaze-types the first and last characters of a word using the novel selection mechanism \"reverse crossing.\" To gaze-type the characters in the middle of the word, the user only needs to glance at the vicinity of the respective keys. We compared the performance of EyeSwipe with that of a dwell-time-based virtual keyboard. EyeSwipe afforded statistically significantly higher typing rates and more comfortable interaction in experiments with ten participants who reached 11.7 words per minute (wpm) after 30 min typing with EyeSwipe.",
"title": ""
},
{
"docid": "2ad6b17fcb0ea20283e318a3fed2939f",
"text": "A fundamental problem of time series is k nearest neighbor (k-NN) query processing. However, existing methods are not fast enough for large dataset. In this paper, we propose a novel approach, STS3, to process k-NN queries by transforming time series to sets and measure the similarity under Jaccard metric. Our approach is more accurate than Dynamic Time Warping(DTW) in our suitable scenarios and it is faster than most of the existing methods, due to the efficient similarity search for sets. Besides, we also developed an index, a pruning and an approximation technique to improve the k-NN query procedure. As shown in the experimental results, all of them could accelerate the query processing effectively.",
"title": ""
},
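The entry above maps time series to sets and runs k-NN under Jaccard similarity. A minimal sketch of that idea follows; the set construction used here (rounding index/value pairs onto a coarse grid) is only a stand-in, since the abstract does not spell out STS3's actual transformation or index.
```python
# k-NN over time series converted to sets, ranked by Jaccard similarity.
def to_set(series, value_step=0.5, index_step=4):
    return {(i // index_step, round(v / value_step)) for i, v in enumerate(series)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def knn(query, database, k=3):
    q = to_set(query)
    scored = sorted(((jaccard(q, to_set(s)), idx) for idx, s in enumerate(database)),
                    reverse=True)
    return scored[:k]

db = [[0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0], [0, 1, 2, 3, 5, 7]]
print(knn([0, 1, 2, 3, 4, 6], db, k=2))
```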
{
"docid": "feee488a72016554ebf982762d51426e",
"text": "Optical imaging sensors, such as television or infrared cameras, collect information about targets or target regions. It is thus necessary to control the sensor's line-of-sight (LOS) to achieve accurate pointing. Maintaining sensor orientation toward a target is particularly challenging when the imaging sensor is carried on a mobile vehicle or when the target is highly dynamic. Controlling an optical sensor LOS with an inertially stabilized platform (ISP) can meet these challenges.A target tracker is a process, typically involving image processing techniques, for detecting targets in optical imagery. This article describes the use and design of ISPs and target trackers for imaging optical sensors.",
"title": ""
},
{
"docid": "8074d30cb422922bc134d07547932685",
"text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.",
"title": ""
},
{
"docid": "c0eefa05ff1e98217402d9cd6390271d",
"text": "Transductive classification using labeled and unlabeled objects in a heterogeneous information network for knowledge extraction is an interesting and challenging problem. Most of the real-world networks are heterogeneous in their natural setting and traditional methods of classification for homogeneous networks are not suitable for heterogeneous networks. In a heterogeneous network, various meta-paths connecting objects of the target type, on which classification is to be performed, make the classification task more challenging. The semantic of each meta-path would lead to the different accuracy of classification. Therefore, weight learning of meta-paths is required to leverage their semantics simultaneously by a weighted combination. In this work, we propose a novel meta-path based framework, HeteClass, for transductive classification of target type objects. HeteClass explores the network schema of the given network and can also incorporate the knowledge of the domain expert to generate a set of meta-paths. The regularization based weight learning method proposed in HeteClass is effective to compute the weights of symmetric as well as asymmetric meta-paths in the network, and the weights generated are consistent with the real-world understanding. Using the learned weights, a homogeneous information network is formed on target type objects by the weighted combination, and transductive classification is performed. The proposed framework HeteClass is flexible to utilize any suitable classification algorithm for transductive classification and can be applied on heterogeneous information networks with arbitrary network schema. Experimental results show the effectiveness of the HeteClass for classification of unlabeled objects in heterogeneous information networks using real-world data sets. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
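The entry above combines meta-path-based similarities into one homogeneous network using learned weights. The sketch below shows only the weighted-combination step on a toy author/paper/venue network; the weights are fixed by hand here and stand in for HeteClass's regularization-based weight learning, which is not reproduced.
```python
# Weighted combination of meta-path similarity matrices between target-type objects.
import numpy as np

def metapath_similarity(adjacency_chain):
    """Multiply the adjacency matrices along a meta-path, e.g. A-P then P-A for APA."""
    sim = adjacency_chain[0]
    for m in adjacency_chain[1:]:
        sim = sim @ m
    return sim

# toy heterogeneous network: 4 authors, 3 papers, 2 venues
ap = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)  # author-paper
pv = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)                      # paper-venue

sim_apa = metapath_similarity([ap, ap.T])               # co-authorship style path
sim_apvpa = metapath_similarity([ap, pv, pv.T, ap.T])   # shared-venue path

weights = {"APA": 0.7, "APVPA": 0.3}                    # assumed, not learned here
combined = weights["APA"] * sim_apa + weights["APVPA"] * sim_apvpa
print(np.round(combined, 2))
```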
{
"docid": "8aca909e0f83a8ac917a453fdcc73b6f",
"text": "Nearly half a century ago, military organizations introduced “Tempest” emission-security test standards to control information leakage from unintentional electromagnetic emanations of digital electronics. The nature of these emissions has changed with evolving technology; electromechanic devices have vanished and signal frequencies increased several orders of magnitude. Recently published eavesdropping attacks on modern flat-panel displays and cryptographic coprocessors demonstrate that the risk remains acute for applications with high protection requirements. The ultra-wideband signal processing technology needed for practical attacks finds already its way into consumer electronics. Current civilian RFI limits are entirely unsuited for emission security purposes. Only an openly available set of test standards based on published criteria will help civilian vendors and users to estimate and manage emission-security risks appropriately. This paper outlines a proposal and rationale for civilian electromagnetic emission-security limits. While the presented discussion aims specifically at far-field video eavesdropping in the VHF and UHF bands, the most easy to demonstrate risk, much of the presented approach for setting test limits could be adapted equally to address other RF emanation risks.",
"title": ""
},
{
"docid": "13b60edf872141b7164ed2a92f6534fc",
"text": "Ordinary differential equations (ODEs) provide a classical framework to model the dynamics of biological systems, given temporal experimental data. Qualitative analysis of the ODE model can lead to further biological insight and deeper understanding compared to traditional experiments alone. Simulation of the model under various perturbations can generate novel hypotheses and motivate the design of new experiments. This short paper will provide an overview of the ODE modeling framework, and present examples of how ODEs can be used to address problems in cancer biology.",
"title": ""
},
{
"docid": "22bf1c80bb833a7cdf6dd70936b40cb7",
"text": "Text messaging has become a popular form of communication with mobile phones worldwide. We present findings from a large scale text messaging study of 70 university students in the United States. We collected almost 60, 000 text messages over a period of 4 months using a custom logging tool on our participants' phones. Our re- sults suggest that students communicate with a large number of contacts for extended periods of time, engage in simultaneous conversations with as many as 9 contacts, and often use text messaging as a method to switch between a variety of communication mediums. We also explore the content of text messages, and ways text message habits have changed over the last decade as it has become more popular. Finally, we offer design suggestions for future mobile communication tools.",
"title": ""
},
{
"docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd",
"text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.",
"title": ""
}
] |
scidocsrr
|
b3d84e8df06bae2823f705922221cafc
|
Superpixel segmentation using Linear Spectral Clustering
|
[
{
"docid": "bff8ad5f962f501b299a0f69a0a820fd",
"text": "Many methods for object recognition, segmentation, etc., rely on tessellation of an image into “superpixels”. A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D “supervoxel” segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation.",
"title": ""
},
{
"docid": "fea6d5cffd6b2943fac155231e7e9d89",
"text": "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported. Spectral graph partitioning methods have been successfully applied to circuit layout [3, 1], load balancing [4] and image segmentation [10, 6]. As a discriminative approach, they do not make assumptions about the global structure of data. Instead, local evidence on how likely two data points belong to the same class is first collected and a global decision is then made to divide all data points into disjunct sets according to some criterion. Often, such a criterion can be interpreted in an embedding framework, where the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation. What makes spectral methods appealing is that their global-optima in the relaxed continuous domain are obtained by eigendecomposition. However, to get a discrete solution from eigenvectors often requires solving another clustering problem, albeit in a lower-dimensional space. That is, eigenvectors are treated as geometrical coordinates of a point set. Various clustering heuristics such as Kmeans [10, 9], transportation [2], dynamic programming [1], greedy pruning or exhaustive search [3, 10] are subsequently employed on the new point set to retrieve partitions. We show that there is a principled way to recover a discrete optimum. This is based on a fact that the continuous optima consist not only of the eigenvectors, but of a whole family spanned by the eigenvectors through orthonormal transforms. The goal is to find the right orthonormal transform that leads to a discretization.",
"title": ""
}
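The passage above builds a spectral embedding from the leading eigenvectors of a normalized affinity matrix and then discretizes it. The sketch below follows that pipeline on toy 2-D points; note that the paper recovers discrete labels with an orthonormal-rotation step, for which plain k-means is substituted here as a common simplification.
```python
# Spectral clustering: Gaussian affinity -> normalized eigenvectors -> discretization.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(points, n_clusters, sigma=1.0):
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(affinity, 0.0)
    deg = affinity.sum(axis=1)
    norm = affinity / np.sqrt(np.outer(deg, deg))          # D^{-1/2} W D^{-1/2}
    eigvals, eigvecs = np.linalg.eigh(norm)
    embedding = eigvecs[:, -n_clusters:]                   # top eigenvectors
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
print(spectral_clusters(pts, n_clusters=2))
```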
] |
[
{
"docid": "ad5943b20597be07646cca1af9d23660",
"text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.",
"title": ""
},
{
"docid": "0df26f2f40e052cde72048b7538548c3",
"text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.",
"title": ""
},
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "6acb744fdeb496ef6a154c76b794e515",
"text": "UNLABELLED\nOvococci form a morphological group that includes several human pathogens (enterococci and streptococci). Their shape results from two modes of cell wall insertion, one allowing division and one allowing elongation. Both cell wall synthesis modes rely on a single cytoskeletal protein, FtsZ. Despite the central role of FtsZ in ovococci, a detailed view of the in vivo nanostructure of ovococcal Z-rings has been lacking thus far, limiting our understanding of their assembly and architecture. We have developed the use of photoactivated localization microscopy (PALM) in the ovococcus human pathogen Streptococcus pneumoniae by engineering spDendra2, a photoconvertible fluorescent protein optimized for this bacterium. Labeling of endogenously expressed FtsZ with spDendra2 revealed the remodeling of the Z-ring's morphology during the division cycle at the nanoscale level. We show that changes in the ring's axial thickness and in the clustering propensity of FtsZ correlate with the advancement of the cell cycle. In addition, we observe double-ring substructures suggestive of short-lived intermediates that may form upon initiation of septal cell wall synthesis. These data are integrated into a model describing the architecture and the remodeling of the Z-ring during the cell cycle of ovococci.\n\n\nIMPORTANCE\nThe Gram-positive human pathogen S. pneumoniae is responsible for 1.6 million deaths per year worldwide and is increasingly resistant to various antibiotics. FtsZ is a cytoskeletal protein polymerizing at midcell into a ring-like structure called the Z-ring. FtsZ is a promising new antimicrobial target, as its inhibition leads to cell death. A precise view of the Z-ring architecture in vivo is essential to understand the mode of action of inhibitory drugs (see T. den Blaauwen, J. M. Andreu, and O. Monasterio, Bioorg Chem 55:27-38, 2014, doi:10.1016/j.bioorg.2014.03.007, for a review on FtsZ inhibitors). This is notably true in ovococcoid bacteria like S. pneumoniae, in which FtsZ is the only known cytoskeletal protein. We have used superresolution microscopy to obtain molecular details of the pneumococcus Z-ring that have so far been inaccessible with conventional microscopy. This study provides a nanoscale description of the Z-ring architecture and remodeling during the division of ovococci.",
"title": ""
},
{
"docid": "da94c4b6780dcb7376edc3750285b113",
"text": "OBJECTIVE\nThis review examines the clinical outcomes associated with exposure to chronic intrafamilial trauma and explores the treatment of the psychological, biological and cognitive sequelae.\n\n\nMETHOD\nThe existing research literature on the subject was collected, using Index Medicus/MEDLINE, Psychological Abstracts and the PILOTS database. The research findings were supplemented with clinical observations by the authors and other clinical writings on this topic.\n\n\nRESULTS\nChildren with histories of exposure to multiple traumatic experiences within their families or in medical settings usually meet criteria for numerous clinical diagnoses, none of which capture the complexity of their biological, emotional and cognitive problems. These are expressed in a multitude of psychological, cognitive, somatic and behavioural problems, ranging from learning disabilities to aggression against self and others.\n\n\nCONCLUSIONS\nExposure to intrafamilial violence and other chronic trauma results in pervasive psychological and biological deficits. Treatment needs to address issues of safety, stabilise impulsive aggression against self and others, promote mastery experiences, compensate for specific developmental deficits, and judiciously process both the traumatic memories and trauma-related expectations.",
"title": ""
},
{
"docid": "975ad302614c296c71f544a356cae3d5",
"text": "For orthodontists, the post-World War II era was characterized by the introduction of fluoridation, sit-down dentistry, and an upswing in extractions. Postwar prosperity, the baby boom, and increased enlightenment of parents contributed to what was later called the \"golden age of orthodontics.\" The subsequent clamor for more orthodontists led to a proliferation of graduate departments and inauguration of the AAO Preceptorship Program. There was also an increase in mixed-dentition treatment, requiring improved methods of analyzing arch lengths.",
"title": ""
},
{
"docid": "5bb63d07c8d7c743c505e6fd7df3dc4f",
"text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.",
"title": ""
},
{
"docid": "a68244dedee73f87103a1e05a8c33b20",
"text": "Given the knowledge that the same or similar objects appear in a set of images, our goal is to simultaneously segment that object from the set of images. To solve this problem, known as the cosegmentation problem, we present a method based upon hierarchical clustering. Our framework first eliminates intra-class heterogeneity in a dataset by clustering similar images together into smaller groups. Then, from each image, our method extracts multiple levels of segmentation and creates connections between regions (e.g. superpixel) across levels to establish intra-image multi-scale constraints. Next we take advantage of the information available from other images in our group. We design and present an efficient method to create inter-image relationships, e.g. connections between image regions from one image to all other images in an image cluster. Given the intra & inter-image connections, we perform a segmentation of the group of images into foreground and background regions. Finally, we compare our segmentation accuracy to several other state-of-the-art segmentation methods on standard datasets, and also demonstrate the robustness of our method on real world data.",
"title": ""
},
{
"docid": "18ee965b96c72dbbfc8ce833548a4f72",
"text": "With the inverse synthetic aperture radar (ISAR) imaging model, targets should move smoothly during the coherent processing interval (CPI). Since the CPI is quite long, fluctuations of a target's velocity and gesture will deteriorate image quality. This paper presents a multiple-input-multiple-output (MIMO)-ISAR imaging method by combining MIMO techniques and ISAR imaging theory. By using a special M-transmitter N-receiver linear array, a group of M orthogonal phase-code modulation signals with identical bandwidth and center frequency is transmitted. With a matched filter set, every target response corresponding to the orthogonal signals can be isolated at each receiving channel, and range compression is completed simultaneously. Based on phase center approximation theory, the minimum entropy criterion is used to rearrange the echo data after the target's velocity has been estimated, and then, the azimuth imaging will finally finish. The analysis of imaging and simulation results show that the minimum CPI of the MIMO-ISAR imaging method is 1/MN of the conventional ISAR imaging method under the same azimuth-resolution condition. It means that most flying targets can satisfy the condition that targets should move smoothly during CPI; therefore, the applicability and the quality of ISAR imaging will be improved.",
"title": ""
},
{
"docid": "9477c5bfa5d8e6966a5ef73491bf3165",
"text": "This paper presents an approach for classification of textual conversation into multiple domain categories using support vector classifier. The feature reduction is done through Principal Component Analysis (PCA) to extract the important features from the feature vector. These features are passed to different configurations of SVM and the best one is chosen for the final process of classification. The domain's categories are defined on real life situations and conversation to train the system like education & research, personal, patriotism, terrorism, medical, religious, sports, and business. The experiment results show that the proposed method works effectively with more than 75% accuracy.",
"title": ""
},
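The entry above describes a bag-of-words / PCA / SVM pipeline with the SVM configuration chosen by comparison. A minimal sketch of that pipeline follows; the tiny corpus, labels and parameter grid are invented for illustration and are not the paper's data or settings.
```python
# TF-IDF features -> PCA feature reduction -> SVM configuration search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

texts = ["the striker scored twice in the final match",
         "the goalkeeper saved a late penalty",
         "new exam schedule released by the university",
         "students submitted their research projects",
         "vaccine trial shows promising results",
         "the hospital opened a new cardiology ward",
         "quarterly profits beat market expectations",
         "the startup raised new venture funding"]
labels = ["sports", "sports", "education", "education",
          "medical", "medical", "business", "business"]

features = TfidfVectorizer().fit_transform(texts).toarray()
reduced = PCA(n_components=3).fit_transform(features)

search = GridSearchCV(SVC(), {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}, cv=2)
search.fit(reduced, labels)
print(search.best_params_, search.predict(reduced[:2]))
```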
{
"docid": "a5568dd5ad71790a621c5e7931e8e675",
"text": "The article first describes characteristics of major infrastructure projects. Second, it documents a much neglected topic in economics: that ex ante estimates of costs and benefits are often very different from actual ex post costs and benefits. For large infrastructure projects the consequences are cost overruns, benefit shortfalls, and the systematic underestimation of risks. Third, implications for cost–benefit analysis are described, including that such analysis is not to be trusted for major infrastructure projects. Fourth, the article uncovers the causes of this state of affairs in terms of perverse incentives that encourage promoters to underestimate costs and overestimate benefits in the business cases for their projects. But the projects that are made to look best on paper are the projects that amass the highest cost overruns and benefit shortfalls in reality. The article depicts this situation as ‘survival of the unfittest’. Fifth, the article sets out to explain how the problem may be solved, with a view to arriving at more efficient and more democratic projects, and avoiding the scandals that often accompany major infrastructure investments. Finally, the article identifies current trends in major infrastructure development. It is argued that a rapid increase in stimulus spending, combined with more investments in emerging economies, combined with more spending on information technology is catapulting infrastructure investment from the frying pan into the fire.",
"title": ""
},
{
"docid": "037d8aa430923ddaaf5f7d280f5ea0c2",
"text": "We describe a system that recognizes human postures with heavy self-occlusion. In particular, we address posture recognition in a robot assisted-living scenario, where the environment is equipped with a top-view camera for monitoring human activities. This setup is very useful because top-view cameras lead to accurate localization and limited inter-occlusion between persons, but conversely they suffer from body parts being frequently self-occluded. The conventional way of posture recognition relies on good estimation of body part positions, which turns out to be unstable in the top-view due to occlusion and foreshortening. In our approach, we learn a posture descriptor for each specific posture category. The posture descriptor encodes how well the person in the image can be `explained' by the model. The postures are subsequently recognized from the matching scores returned by the posture descriptors. We select the state-of-the-art approach of pose estimation as our posture descriptor. The results show that our method is able to correctly classify 79.7% of the test sample, which outperforms the conventional approach by over 23%.",
"title": ""
},
{
"docid": "3c58e2cb4b12ae6e1d5d8676b4a495d1",
"text": "We introduce anisotropic Voronoi diagrams, a generalization of multiplicatively weighted Voronoi diagrams suitable for generating guaranteed-quality meshes of domains in which long, skinny triangles are required, and where the desired anisotropy varies over the domain. We discuss properties of anisotropic Voronoi diagrams of arbitrary dimensionality---most notably circumstances in which a site can see its entire Voronoi cell. In two dimensions, the anisotropic Voronoi diagram dualizes to a triangulation under these same circumstances. We use these properties to develop an algorithm for anisotropic triangular mesh generation in which no triangle has an angle smaller than 20A, as measured from the skewed perspective of any point in the triangle.",
"title": ""
},
{
"docid": "d7a9465ac031cf7be6f3e74276805f0f",
"text": "Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cyle. Shedding light on this question is key to disentangle the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21 ∗Duke University †University of North Carolina at Chapel Hill and IZA ‡Duke University and IZA.",
"title": ""
},
{
"docid": "9a758183aa6bf6ee8799170b5a526e7e",
"text": "The field of serverless computing has recently emerged in support of highly scalable, event-driven applications. A serverless application is a set of stateless functions, along with the events that should trigger their activation. A serverless runtime allocates resources as events arrive, avoiding the need for costly pre-allocated or dedicated hardware. \nWhile an attractive economic proposition, serverless computing currently lags behind the state of the art when it comes to function composition. This paper addresses the challenge of programming a composition of functions, where the composition is itself a serverless function. \nWe demonstrate that engineering function composition into a serverless application is possible, but requires a careful evaluation of trade-offs. To help in evaluating these trade-offs, we identify three competing constraints: functions should be considered as black boxes; function composition should obey a substitution principle with respect to synchronous invocation; and invocations should not be double-billed. \nFurthermore, we argue that, if the serverless runtime is limited to a reactive core, i.e. one that deals only with dispatching functions in response to events, then these constraints form the serverless trilemma. Without specific runtime support, compositions-as-functions must violate at least one of the three constraints. \nFinally, we demonstrate an extension to the reactive core of an open-source serverless runtime that enables the sequential composition of functions in a trilemma-satisfying way. We conjecture that this technique could be generalized to support other combinations of functions.",
"title": ""
},
{
"docid": "2bc13f23a10b9701517718f93d86e8f4",
"text": "In this paper, a novel method for profiling phishing activity from an analysis of phishing emails is proposed. Profiling is useful in determining the activity of an individual or a particular group of phishers. Work in the area of phishing is usually aimed at detection of phishing emails. In this paper, we concentrate on profiling as distinct from detection of phishing emails. We formulate the profiling problem as a multi-label classification problem using the hyperlinks in the phishing emails as features and structural properties of emails along with who is (i.e. DNS) information on hyperlinks as profile classes. Further, we generate profiles based on classifier predictions. Thus, classes become elements of profiles. We employ a boosting algorithm (AdaBoost) as well as SVMto generate multi-label class predictions on three different datasets created from hyperlink information in phishing emails. These predictions are further utilized to generate complete profiles of these emails. Results show that profiling can be done with quite high accuracy using hyperlink information.",
"title": ""
},
{
"docid": "0e3f43a28c477ae0e15a8608d3a1d4a5",
"text": "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer’s Disease. We found that a slightly unconventional ”stacked 2D” approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular ”tri-planar” approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement. ar X iv :1 50 5. 02 00 0v 1 [ cs .L G ] 8 M ay 2 01 5",
"title": ""
},
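The entry above compares 2D, stacked-2D, tri-planar and full-3D patch approaches. The sketch below illustrates only the tri-planar idea: for a voxel in a 3D volume, take three orthogonal 2D patches centred on it, which would then feed a 2D CNN. The patch size and padding behaviour are assumptions, not the report's actual preprocessing code.
```python
# Extract axial, coronal and sagittal patches around a voxel of a 3D volume.
import numpy as np

def triplanar_patches(volume, center, half=16):
    """Return three (2*half, 2*half) patches around `center` = (x, y, z)."""
    x, y, z = center
    pad = half
    v = np.pad(volume, pad, mode="constant")
    x, y, z = x + pad, y + pad, z + pad
    axial    = v[x - half:x + half, y - half:y + half, z]
    coronal  = v[x - half:x + half, y, z - half:z + half]
    sagittal = v[x, y - half:y + half, z - half:z + half]
    return np.stack([axial, coronal, sagittal])   # shape (3, 2*half, 2*half)

volume = np.random.default_rng(0).normal(size=(64, 64, 64))
patches = triplanar_patches(volume, center=(10, 50, 30))
print(patches.shape)   # (3, 32, 32)
```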
{
"docid": "f1ada71621322b8f0b4c48130aa79bd5",
"text": "In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure, and can also be implemented on circuits, on which the converging time can be significantly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases.",
"title": ""
},
{
"docid": "f8d06c65acdbec0a41fe49fc4e7aef09",
"text": "We present an exhaustive review of research on automatic classification of sounds from musical instruments. Two different but complementary approaches are examined, the perceptual approach and the taxonomic approach. The former is targeted to derive perceptual similarity functions in order to use them for timbre clustering and for searching and retrieving sounds by timbral similarity. The latter is targeted to derive indexes for labeling sounds after cultureor user-biased taxonomies. We review the relevant features that have been used in the two areas and then we present and discuss different techniques for similarity-based clustering of sounds and for classification into pre-defined instrumental categories.",
"title": ""
}
] |
scidocsrr
|
33de766f28a69a864ecd5ce970baf882
|
Enabling Low-Latency Applications in Fog-Radio Access Networks
|
[
{
"docid": "335a330d7c02f13c0f50823461f4e86f",
"text": "Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources-the transmit precoding matrices of the MUs-and the computational resources-the CPU cycles/second assigned by the cloud to each MU-in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.",
"title": ""
}
] |
[
{
"docid": "b544aec3db71397c3b81851e8d770fda",
"text": "A novel substrate integrated waveguide (SIW) slot antenna having folded corrugated stubs is proposed for suppressing the backlobes of the SIW slot antenna associated with the diffraction of the spillover current. The longitudinal array of the folded stubs replacing the SIW via-holes effectively prevents the propagation of the surface spillover current. The measured front-to-back ratio (FTBR) has been greatly (15 dB) improved from that of the common SIW slot antenna. We expect that the proposed folded corrugated SIW (FCSIW) slot antenna plays an important role for reducing the excessive backside radiation of the SIW slot antenna and for decreasing mutual coupling in SIW slot antenna arrays.",
"title": ""
},
{
"docid": "dba1d0b9a2c409bd6ff9c39cbdb1e7ed",
"text": "Recent research suggests that social interactions in video games may lead to the development of community bonding and prosocial attitudes. Building on this line of research, a national survey of U.S. adults finds that gamers who develop ties with a community of fellow gamers possess gaming social capital, a new gaming-related community construct that is shown to be a positive antecedent in predicting both face-to-face social capital and civic participation.",
"title": ""
},
{
"docid": "a04302721f62c1af3b9be630524f03ab",
"text": "Hyperspectral image processing has been a very dynamic area in remote sensing and other applications in recent years. Hyperspectral images provide ample spectral information to identify and distinguish spectrally similar materials for more accurate and detailed information extraction. Wide range of advanced classification techniques are available based on spectral information and spatial information. To improve classification accuracy it is essential to identify and reduce uncertainties in image processing chain. This paper presents the current practices, problems and prospects of hyperspectral image classification. In addition, some important issues affecting classification performance are discussed.",
"title": ""
},
{
"docid": "9ed3b0144df3dfa88b9bfa61ee31f40a",
"text": "OBJECTIVE\nTo determine the frequency of early relapse after achieving good initial correction in children who were on clubfoot abduction brace.\n\n\nMETHODS\nThe cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, and included parents of children of either gender in the age range of 6 months to 3years with idiopathic clubfoot deformities who had undergone Ponseti treatment between September 2012 and June 2013, and who were on maintenance brace when the data was collected from December 2013 to March 2014. Parents of patients with follow-up duration in brace less than six months and those with syndromic clubfoot deformity were excluded. The interviews were taken through a purposive designed questionnaire. SPSS 16 was used for data analysis.\n\n\nRESULTS\nThe study included parents of 120 patients. Of them, 95(79.2%) behaved with good compliance on Denis Browne Splint, 10(8.3%) were fair and 15(12.5%)showed poor compliance. Major reason for poor and non-compliance was unaffordability of time and cost for regular follow-up. Besides, 20(16.67%) had inconsistent use due to delay inre-procurement of Foot Abduction Braceonce the child had outgrown the shoe. Only 4(3.33%) talked of cultural barriers and conflict of interest between the parents. Early relapse was observed in 23(19.16%) patients and 6(5%) of them responded to additional treatment and were put back on brace treatment; 13(10.83%) had minor relapse with forefoot varus, without functional disability, and the remaining 4(3.33%) had major relapse requiring extensive surgery. Overall success was recorded in 116(96.67%) cases.\n\n\nCONCLUSIONS\nThe positioning of shoes on abduction brace bar, comfort in shoes, affordability, initial and subsequent delay in procurement of new shoes once the child's feet overgrew the shoe, were the four containable factors on the part of Ponseti practitioner.",
"title": ""
},
{
"docid": "ff02ddb759f94367813324ce15f09f8d",
"text": "The present work describes a website designed for remote teaching of optical measurements using lasers. It enables senior undergraduate and postgraduate students to learn theoretical aspects of the subject and also have a means to perform experiments for better understanding of the application at hand. At this stage of web development, optical methods considered are those based on refractive index changes in the material medium. The website is specially designed in order to provide remote access of expensive lasers, cameras, and other laboratory instruments by employing a commercially available web browser. The web suite integrates remote experiments, hands-on experiments and life-like optical images generated by using numerical simulation techniques based on Open Foam software package. The remote experiments are real time experiments running in the physical laboratory but can be accessed remotely from anywhere in the world and at any time. Numerical simulation of problems enhances learning, visualization of problems and interpretation of results. In the present work hand-on experimental results are discussed with respect to simulated results. A reasonable amount of resource material, specifically theoretical background of interferometry is available on the website along with computer programs image processing and analysis of results obtained in an experiment.",
"title": ""
},
{
"docid": "123b93071e0ae555734c0ab27d29b6bf",
"text": "Computer-Assisted Pronunciation Training System (CAPT) has become an important learning aid in second language (L2) learning. Our approach to CAPT is based on the use of phonological rules to capture language transfer effects that may cause mispronunciations. This paper presents an approach for automatic derivation of phonological rules from L2 speech. The rules are used to generate an extended recognition network (ERN) that captures the canonical pronunciations of words, as well as the possible mispronunciations. The ERN is used with automatic speech recognition for mispronunciation detection. Experimentation with an L2 speech corpus that contains recordings from 100 speakers aims to compare the automatically derived rules with manually authored rules. Comparable performance is achieved in mispronunciation detection (i.e. telling which phone is wrong). The automatically derived rules also offer improved performance in diagnostic accuracy (i.e. identify how the phone is wrong).",
"title": ""
},
{
"docid": "d245fbc12d9a7d36751e3b75d9eb0e62",
"text": "What makes for an explanation of \"black box\" AI systems such as Deep Nets? We reviewed the pertinent literatures on explanation and derived key ideas. This set the stage for our empirical inquiries, which include conceptual cognitive modeling, the analysis of a corpus of cases of \"naturalistic explanation\" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation. The purpose of our work is to contribute to the program of research on “Explainable AI.” In this report we focus on our initial synthetic modeling activities and the development of measures for the evaluation of explainability in human-machine work systems. INTRODUCTION The importance of explanation in AI has been emphasized in the popular press, with considerable discussion of the explainability of Deep Nets and Machine Learning systems (e.g., Kuang, 2017). For such “black box” systems, there is a need to explain how they work so that users and decision makers can develop appropriate trust and reliance. As an example, referencing Figure 1, a Deep Net that we created was trained to recognize types of tools. Figure 1. Some examples of Deep Net classification. Outlining the axe and overlaying bird silhouettes on it resulted in a confident misclassification. While a fuzzy hammer is correctly classified, an embossed rendering is classified as a saw. Deep Nets can classify with high hit rates for images that fall within the variation of their training sets, but are nonetheless easily spoofed using instances that humans find easy to classify. Furthermore, Deep Nets have to provide some classification for an input. Thus, a Volkswagen might be classified as a tulip by a Deep Net trained to recognize types of flowers. So, if Deep Nets do not actually possess human-semantic concepts (e.g., that axes have things that humans call \"blades\"), what do the Deep Nets actually \"see\"? And more directly, how can users be enabled to develop appropriate trust and reliance on these AI systems? Articles in the popular press highlight the successes of Deep Nets (e.g., the discovery of planetary systems in Hubble Telescope data; Temming 2018), and promise diverse applications \"... the recognition of faces, handwriting, speech... navigation and control of autonomous vehicles... it seems that neural networks are being used everywhere\" (Lucky, 2018, p. 24). And yet \"models are more complex and less interpretable than ever... Justifying [their] decisions will only become more crucial\" (Biran and Cotton, 2017, p. 4). Indeed, a proposed regulation before the European Union (Goodman and Flaxman, 2016) asserts that users have the \"right to an explanation.” What form must an explanation for Deep Nets take? This is a challenge in the DARPA \"Explainable AI\" (XAI) Program: To develop AI systems that can engage users in a process in which the mechanisms and \"decisions\" of the AI are explained. Our tasks on the Program are to: (1). Integrate philosophical studies and psychological research in order to identify consensus points, key concepts and key variables of explanatory reasoning, (2). Develop and validate measures of explanation goodness, explanation satisfaction, mental models and human-XAI performance, (3) Develop and evaluate a computational model of how people understand computational devices, and C op yr ig ht 2 01 8 by H um an F ac to rs a nd E rg on om ic s So ci et y. D O I 1 0. 
11 77 /1 54 19 31 21 86 21 04 7 Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting 197",
"title": ""
},
{
"docid": "76dd7060fdbf9927495985dd5313896f",
"text": "Many network solutions and overlay networks utilize probabilistic techniques to reduce information processing and networking costs. This survey article presents a number of frequently used and useful probabilistic techniques. Bloom filters and their variants are of prime importance, and they are heavily used in various distributed systems. This has been reflected in recent research and many new algorithms have been proposed for distributed systems that are either directly or indirectly based on Bloom filters. In this survey, we give an overview of the basic and advanced techniques, reviewing over 20 variants and discussing their application in distributed systems, in particular for caching, peer-to-peer systems, routing and forwarding, and measurement data summarization.",
"title": ""
},
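The entry above surveys Bloom filters and their variants for distributed systems. A minimal Bloom filter sketch follows: k hash functions set k bits per inserted key, lookups may yield false positives but never false negatives. The sizing parameters (m, k) and the peer-address example are hard-coded assumptions for illustration.
```python
# Minimal Bloom filter using k salted SHA-256 hashes over an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

bf = BloomFilter()
for peer in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]:
    bf.add(peer)
print("10.0.0.2" in bf, "192.168.0.9" in bf)   # True, almost certainly False
```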
{
"docid": "fe05cc4e31effca11e2718ce05635a97",
"text": "In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradientbased approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker’s knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.",
"title": ""
},
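The entry above evaluates classifiers against gradient-based evasion at test time. The sketch below illustrates the core move in that spirit: starting from a malicious sample, follow the gradient of a differentiable discriminant (here a hand-trained logistic regression) to push its score below the decision threshold. The toy data, step size and iteration counts are assumptions, not the paper's experimental setup.
```python
# Gradient-descent evasion of a linear (logistic) discriminant.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)          # 1 = malicious

# fit logistic regression by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def score(x):                               # decision function g(x)
    return x @ w + b

x_attack = np.array([1.2, 1.0])             # starts on the malicious side
step = 0.1
while score(x_attack) > 0:                  # descend the classifier's gradient
    x_attack -= step * w / np.linalg.norm(w)
print("evading sample:", x_attack, "score:", round(float(score(x_attack)), 3))
```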
{
"docid": "b6d856bf3b61883e3755cf00810b98c7",
"text": "The development of cell printing is vital for establishing biofabrication approaches as clinically relevant tools. Achieving this requires bio-inks which must not only be easily printable, but also allow controllable and reproducible printing of cells. This review outlines the general principles and current progress and compares the advantages and challenges for the most widely used biofabrication techniques for printing cells: extrusion, laser, microvalve, inkjet and tissue fragment printing. It is expected that significant advances in cell printing will result from synergistic combinations of these techniques and lead to optimised resolution, throughput and the overall complexity of printed constructs.",
"title": ""
},
{
"docid": "8bdbf6fc33bc0b2cb5911683c13912a0",
"text": "The breaking of solid objects, like glass or pottery, poses a complex problem for computer animation. We present our methods of using physical simulation to drive the animation of breaking objects. Breakage is obtaned in a three-dimensional flexible model as the limit of elastic behavior. This article describes three principal features of the model: a breakage model, a collision-detection/response scheme, and a geometric modeling method. We use networks of point masses connected by springs to represent physical objects that can bend and break. We present effecient collision-detection algorithms, appropriate for simulating the collisions between the various pieces that interact in breakage. The capability of modeling real objects is provided by a technique of building up composite structures from simple lattice models. We applied these methods to animate the breaking of a teapot and other dishware activities in the animationTipsy Turvy shown at Siggraph '89. Animation techniques that rely on physical simulation to control the motion of objects are discussed, and further topics for research are presented.",
"title": ""
},
{
"docid": "8a24f9d284507765e0026ae8a70fc482",
"text": "The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-section study with prospective enrollment of patients was conducted in Dar-es-Salaam Tanzania. For patients with sputum smear negative, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm was calculated. Predictors of sputum culture positive were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, 127 (27.2%) had sputum culture positive for Mycobacteria Tuberculosis, of whom 66 (51.9%) were correctly treated with anti-Tuberculosis drugs and 61 (48.1%) were missed and did not get anti-Tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-Tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of smear negative but culture positive pulmonary tuberculosis. The current practices of establishing pulmonary tuberculosis diagnosis are not sensitive and specific enough to establish the diagnosis of Acid Fast Bacilli smear negative pulmonary tuberculosis and over treat people with no pulmonary tuberculosis.",
"title": ""
},
{
"docid": "c19844950a3531d152408fd05904772b",
"text": "Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character level language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.",
"title": ""
},
{
"docid": "48774da3dd848f6e7dc0b63fdf89694e",
"text": "Near Field Communication (NFC) offers intuitive interactions between humans and vehicles. In this paper we explore different NFC based use cases in an automotive context. Nearly all described use cases have been implemented in a BMW vehicle to get experiences of NFC in a real in-car environment. We describe the underlying soft- and hardware architecture and our experiences in setting up the prototype.",
"title": ""
},
{
"docid": "9201964dfef74396dabb6bd2a3effee3",
"text": "A MATLAB program was developed to invert first arrival travel time picks from zero offset profiling borehole ground penetrating radar traces to obtain the electromagnetic wave propagation velocities in soil. Zero-offset profiling refers to a mode of operation wherein the centers of the bistatic antennae being lowered to the same depth below ground for each measurement. The inversion uses a simulated annealing optimization routine, whereby the model attempts to reduce the root mean square error between the measured and modeled travel time by perturbing the velocity in a ray tracing routine. Measurement uncertainty is incorporated through the presentation of the ensemble mean and standard deviation from the results of a Monte Carlo simulation. The program features a pre-processor to modify or delete travel time information from the profile before inversion and post-processing through presentation of the ensemble statistics of the water contents inferred from the velocity profile. The program includes a novel application of a graphical user interface to animate the velocity fitting routine. r 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "77e385b7e7305ec0553c980f22bfa3b4",
"text": "Two and three-dimensional simulations of experiments on atmosphere mixing and stratification in a nuclear power plant containment were performed with the code CFX4.4, with the inclusion of simple models for steam condensation. The purpose was to assess the applicability of the approach to simulate the behaviour of light gases in containments at accident conditions. The comparisons of experimental and simulated results show that, despite a tendency to simulate more intensive mixing, the proposed approach may replicate the non-homogeneous structure of the atmosphere reasonably well. Introduction One of the nuclear reactor safety issues that have lately been considered using Computational Fluid Dynamics (CFD) codes is the problem of predicting the eventual non-homogeneous concentration of light flammable gas (hydrogen) in the containment of a nuclear power plant (NPP) at accident conditions. During a hypothetical severe accident in a Pressurized Water Reactor NPP, hydrogen could be generated due to Zircaloy oxidation in the reactor core. Eventual high concentrations of hydrogen in some parts of the containment could cause hydrogen ignition and combustion, which could threaten the containment integrity. The purpose of theoretical investigations is to predict hydrogen behaviour at accident conditions prior to combustion. In the past few years, many investigations about the possible application of CFD codes for this purpose have been started [1-5]. CFD codes solve the transport mass, momentum and energy equations when a fluid system is modelled using local instantaneous description. Some codes, which also use local instantaneous description, have been developed specifically for nuclear applications [68]. Although many CFD codes are multi-purpose, some of them still lack some models, which are necessary for adequate simulations of containment phenomena. In particular, the modelling of steam condensation often has to be incorporated in the codes by the users. These theoretical investigations are complemented by adequate experiments. Recently, the following novel integral experimental facilities have been set up in Europe: TOSQAN [9,10], at the Institut de Radioprotection et de Sureté Nucléaire (IRSN) in Saclay (France), MISTRA [9,11], at the",
"title": ""
},
{
"docid": "86f25f09b801d28ce32f1257a39ddd44",
"text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.",
"title": ""
},
{
"docid": "c3b88e3ff4c4f8932892b5692e4b10eb",
"text": "Medicine is an extremely challenging field of research, which has been more than any other discipline of fundamental importance for human existence. The variety and inherent complexity of unsolved problems has made it a major driving force for many natural and engineering sciences. Hence, from the early days of Computer Graphics and Computer Vision the medical field has been one of most important application areas with an enduring provision of fascinating research challenges. Conversely, individual Graphics and Computer Vision tools and methods have become increasingly irreplaceable in modern medicine. In this article I will present my personal view of the interdisciplinary field of surgery simulation which encompasses many different disciplines including Medicine, Computer Graphics, Computer Vision, Mechanics, Material Sciences, Robotics and Numeric Analysis. I will discuss the individual tasks, challenges and problems arising during the design and implementation of advanced surgery simulation environments, where my emphasis is directed towards the roles of graphics and vision.",
"title": ""
},
{
"docid": "69fd3e6e9a1fc407d20b0fb19fc536e3",
"text": "In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.",
"title": ""
},
{
"docid": "f01a19652bff88923a3141fb56d805e2",
"text": "This paper presents a visible light communication system, focusing mostly on the aspects related with the hardware design and implementation. The designed system is aimed to ensure a highly-reliable communication between a commercial LED-based traffic light and a receiver mounted on a vehicle. Enabling wireless data transfer between the road infrastructure and vehicles has the potential to significantly increase the safety and efficiency of the transportation system. The paper presents the advantages of the proposed system and explains same of the choices made in the implementation process.",
"title": ""
}
] |
scidocsrr
|
3435459303d013c8fa9546cc78ca486f
|
A New Zero-Voltage-Switching Push-Pull Converter
|
[
{
"docid": "872ef59b5bec5f6cbb9fcb206b6fe49e",
"text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.",
"title": ""
},
{
"docid": "9f5d77e73fb63235a6e094d437f1be7e",
"text": "An improved zero-voltage and zero-current-switching (ZVZCS) full bridge dc-dc converter is proposed based on phase shift control. With an auxiliary center tapped rectifier at the secondary side, an auxiliary voltage source is applied to reset the primary current of the transformer winding. Therefore, zero-voltage switching for the leading leg switches and zero-current switching for the lagging leg switches can be achieved, respectively, without any increase of current and voltage stresses. Since the primary current in the circulating interval for the phase shift full bridge converter is eliminated, the conduction loss in primary switches is reduced. A 1 kW prototype is made to verify the theoretical analysis.",
"title": ""
}
] |
[
{
"docid": "c7b58a4ebb65607d1545d3bc506c2fed",
"text": "The goal of this study was to examine the relationship of self-efficacy, social support, and coping strategies with stress levels of university students. Seventy-five Education students completed four questionnaires assessing these variables. Significant correlations were found for stress with total number of coping strategies and the use of avoidance-focused coping strategies. As well, there was a significant correlation between social support from friends and emotion-focused coping strategies. Gender differences were found, with women reporting more social support from friends than men. Implications of these results for counselling university students are discussed.",
"title": ""
},
{
"docid": "2418cf34f09335d6232193b21ee7ae49",
"text": "The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.",
"title": ""
},
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "1c20908b24c78b43a858ba154165b544",
"text": "The implementation of concentrated windings in interior permanent magnet (IPM) machines has numerous advantages over distributed windings, with the disadvantage being mainly the decrease in saliency ratio. This paper presents a proposed finite element (FE) method in which the d- and q-axis inductances (Ld and Lq) of the IPM machine with fractional-slot concentrated windings can be accurately determined. This method is used to determine Ld and Lq of various winding configurations and to determine the optimum saliency ratio for a 12-slot 14-pole model with fractional-slot concentrated windings. FE testing were carried out by the use of Flux2D.",
"title": ""
},
{
"docid": "529edb7ca367261731a154c24512d288",
"text": "OBJECTIVE\nA depressive disorder is an illness that involves the body, mood, thoughts and behaviors. This study was performed to identify the presence of depression among medical students of Urmia University of Medical Sciences.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted on 700 undergraduate medical and basic sciences students. Beck depression inventory (BDI) used for data gathering.\n\n\nRESULTS\nMean score of BDI was 10.4 ± 0.8 and 52.6% of students scored under the depression threshold. Four of them had severe depression. RESULTS showed no significant relationship between depression and age, education, sex, rank of birth or duration of education.\n\n\nCONCLUSION\nPrevalence of depression that can affect the students' quality of education and social behavior was high in Urmia University of Medical Sciences.",
"title": ""
},
{
"docid": "19ab6d7d30cd27f97b948674575efe2a",
"text": "We present a user-friendly image editing system that supports a drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), postprocess illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.",
"title": ""
},
{
"docid": "fe06ac2458e00c5447a255486189f1d1",
"text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.",
"title": ""
},
{
"docid": "48072e0b5a49302982c643ae675f60c0",
"text": "News recommendation has become a big attraction with which major Web search portals retain their users. Contentbased Filtering and Collaborative Filtering are two effective methods, each serving a specific recommendation scenario. The Content-based Filtering approaches inspect rich contexts of the recommended items, while the Collaborative Filtering approaches predict the interests of long-tail users by collaboratively learning from interests of related users. We have observed empirically that, for the problem of news topic displaying, both the rich context of news topics and the long-tail users exist. Therefore, in this paper, we propose a Content-based Collaborative Filtering approach (CCF) to bring both Content-based Filtering and Collaborative Filtering approaches together. We found that combining the two is not an easy task, but the benefits of CCF are impressive. On one hand, CCF makes recommendations based on the rich contexts of the news. On the other hand, CCF collaboratively analyzes the scarce feedbacks from the long-tail users. We tailored this CCF approach for the news topic displaying on the Bing front page and demonstrated great gains in attracting users. In the experiments and analyses part of this paper, we discuss the performance gains and insights in news topic recommendation in Bing.",
"title": ""
},
{
"docid": "504fcb97010d71fd07aca8bc9543af8b",
"text": "The presence of raindrop induced distortion can have a significant negative impact on computer vision applications. Here we address the problem of visual raindrop distortion in standard colour video imagery for use in non-static, automotive computer vision applications where the scene can be observed to be changing over subsequent consecutive frames. We utilise current state of the art research conducted into the investigation of salience mapping as means of initial detection of potential raindrop candidates. We further expand on this prior state of the art work to construct a combined feature rich descriptor of shape information (Hu moments), isolation of raindrops pixel information from context, and texture (saliency derived) within an improved visual bag of words verification framework. Support Vector Machine and Random Forest classification were utilised for verification of potential candidates, and the effects of increasing discrete cluster centre counts on detection rates were studied. This novel approach of utilising extended shape information, isolation of context, and texture, along with increasing cluster counts, achieves a notable 13% increase in precision (92%) and 10% increase in recall (86%) against prior state of the art. False positive rates were also observed to decrease with a minimal false positive rate of 14% observed. iv ACKNOWLEDGEMENTS I wish to thank Dr Toby Breckon for his time and commitment during my project, and for the help given in patiently explaining things for me. I also wish to thank Dr Mark Stillwell for his never tiring commitment to proofreading and getting up to speed with my project. Without them, this thesis would never have come together in the way it did. I wish to thank my partner, Dr Victoria Gortowski for allowing me to go back to university, supporting me and having faith that I could do it, without which, I do not think I would have. And last but not least, Lara and Mitsy. Thank you.",
"title": ""
},
{
"docid": "581df8e68fdd475d1f0fab64335aa412",
"text": "In this paper, a method for Li-ion battery state of charge (SOC) estimation using particle filter (PF) is proposed. The equivalent circuit model for Li-ion battery is established based on the available battery block in MATLAB/Simulink. To improve the model's accuracy, the circuit parameters are represented by functions of SOC. Then, the PF algorithm is utilized to do SOC estimation for the battery model. From simulation it reveals that PF provides accurate SOC estimation. It is demonstrated that the proposed method is effective on Li-ion battery SOC estimation.",
"title": ""
},
{
"docid": "d341486002f2b0f5e620f5a63873577c",
"text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.",
"title": ""
},
{
"docid": "0b19bd9604fae55455799c39595c8016",
"text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.",
"title": ""
},
{
"docid": "d69d694eadb068dc019dce0eb51d5322",
"text": "In this paper the application of image prior combinations to the Bayesian Super Resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a Total Variation (TV) prior and a prior based on the `1 norm of horizontal and vertical first order differences (f.o.d.), are combined with a non-sparse Simultaneous Auto Regressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying High Resolution (HR) image, the use of variational approximation will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the HR image given the observations that minimizes a linear convex combination of Kullback-Leibler (KL) divergences. We find this distribution in closed form. The estimated HR images are compared with the ones obtained by other SR reconstruction methods.",
"title": ""
},
{
"docid": "47e1d6903fff73a4a5059265e609a6f7",
"text": "Users of cloud services are presented with a bewildering choice of VM types and the choice of VM can have significant implications on performance and cost. In this paper we address the fundamental problem of accurately and economically choosing the best VM for a given workload and user goals. To address the problem of optimal VM selection, we present PARIS, a data-driven system that uses a novel hybrid offline and online data collection and modeling framework to provide accurate performance estimates with minimal data collection. PARIS is able to predict workload performance for different user-specified metrics, and resulting costs for a wide range of VM types and workloads across multiple cloud providers. When compared to sophisticated baselines, including collaborative filtering and a linear interpolation model using measured workload performance on two VM types, PARIS produces significantly better estimates of performance. For instance, it reduces runtime prediction error by a factor of 4 for some workloads on both AWS and Azure. The increased accuracy translates into a 45% reduction in user cost while maintaining performance.",
"title": ""
},
{
"docid": "69d7eab2e20e18538af87c11e0794fed",
"text": "Usually scientists breed research ideas inspired by previous publications, but they are unlikely to follow all publications in the unbounded literature collection. The volume of literature keeps on expanding extremely fast, whilst not all papers contribute equal impact to the academic society. Being aware of potentially influential literature would put one in an advanced position in choosing important research references. Hence, estimation of potential influence is of great significance. We study a challenging problem of identifying potentially influential literature. We examine a set of hypotheses on what are the fundamental characteristics for highly cited papers and find some interesting patterns. Based on these observations, we learn to identify potentially influential literature via Future Influence Prediction (FIP), which aims to estimate the future influence of literature. The system takes a series of features of a particular publication as input and produces as output the estimated citation counts of that article after a given time period. We consider several regression models to formulate the learning process and evaluate their performance based on the coefficient of determination (R2). Experimental results on a real-large data set show a mean average predictive performance of 83.6% measured in R^2. We apply the learned model to the application of bibliography recommendation and obtain prominent performance improvement in terms of Mean Average Precision (MAP).",
"title": ""
},
{
"docid": "b3ffe7b94b8965be5fb4f702c4ce5f3d",
"text": "BACKGROUND\nThe ability to rise from sitting to standing is critical to an individual's quality of life, as it is a prerequisite for functional independence. The purpose of the current study was to examine the hypothesis that test durations as assessed with the instrumented repeated Sit-To-Stand (STS) show stronger associations with health status, functional status and daily physical activity of older adults than manually recorded test durations.\n\n\nMETHODS\nIn 63 older participants (mean age 83 ±6.9 years, 51 female), health status was assessed using the European Quality of Life questionnaire and functional status was assessed using the physical function index of the of the RAND-36. Physical performance was measured using a wearable sensor-based STS test. From this test, durations, sub-durations and kinematics of the STS movements were estimated and analysed. In addition, physical activity was measured for one week using an activity monitor and episodes of lying, sitting, standing and locomotion were identified. Associations between STS parameters with health status, functional status and daily physical activity were assessed.\n\n\nRESULTS\nThe manually recorded STS times were not significantly associated with health status (p = 0.457) and functional status (p = 0.055), whereas the instrumented STS times were (both p = 0.009). The manually recorded STS durations showed a significant association to daily physical activity for mean sitting durations (p = 0.042), but not for mean standing durations (p = 0.230) and mean number of locomotion periods (p = 0.218). Furthermore, durations of the dynamic sit-to-stand phase of the instrumented STS showed more significant associations with health status, functional status and daily physical activity (all p = 0.001) than the static phases standing and sitting (p = 0.043-0.422).\n\n\nCONCLUSIONS\nAs hypothesized, instrumented STS durations were more strongly associated with participant health status, functional status and physical activity than manually recorded STS durations in older adults. Furthermore, instrumented STS allowed assessment of the dynamic phases of the test, which were likely more informative than the static sitting and standing phases.",
"title": ""
},
{
"docid": "81d4f23c5b6d407e306569f4e3ad4be9",
"text": "While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space - sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.",
"title": ""
},
{
"docid": "d8e81272912d09a83eb692c43c9fe5c4",
"text": "In this paper, we compare the effectiveness of Hidden Markov Models (HMMs) with that of Profile Hidden Markov Models (PHMMs), where both are trained on sequences of API calls. We compare our results to static analysis using HMMs trained on sequences of opcodes, and show that dynamic analysis achieves significantly stronger results in many cases. Furthermore, in comparing our two dynamic analysis approaches, we find that using PHMMs consistently outperforms our technique based on HMMs.",
"title": ""
},
{
"docid": "2d2465aff21421330f82468858a74cf4",
"text": "There has been a tremendous increase in popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing the data with user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that majority of the fitness trackers use unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of user's activity, making it possible for an eavesdropper to determine user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent user's gait which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.",
"title": ""
}
] |
scidocsrr
|
08a8ce6bea9ce053a4a2c10da877bf2f
|
PinOS: a programmable framework for whole-system dynamic instrumentation
|
[
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
}
] |
[
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "755c4c452a535f30e53f0e9e77f71d20",
"text": "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video superresolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video superresolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video superresolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.",
"title": ""
},
{
"docid": "6089f02c3fc3b1760c03190818c28af1",
"text": "In this paper we suggest viewing images (as well as attacks on them) as a sequence of linear operators and propose novel hashing algorithms employing transforms that are based on matrix invariants. To derive this sequence, we simply cover a two dimensional representation of an image by a sequence of (possibly overlapping) rectangles R/sub i/ whose sizes and locations are chosen randomly/sup 1/ from a suitable distribution. The restriction of the image (representation) to each R/sub i/ gives rise to a matrix A/sub i/. The fact that A/sub i/'s will overlap and are random, makes the sequence (respectively) a redundant and non-standard representation of images, but is crucial for our purposes. Our algorithms first construct a secondary image, derived from input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics. From the secondary image (which does not perceptually resemble the input), we further extract the final features which can be used as a hash value (and can be further suitably quantized). In this paper, we use spectral matrix invariants as embodied by singular value decomposition. Surprisingly, formation of the secondary image turns out be quite important since it not only introduces further robustness (i.e., resistance against standard signal processing transformations), but also enhances the security properties (i.e. resistance against intentional attacks). Indeed, our experiments reveal that our hashing algorithms extract most of the geometric information from the images and hence are robust to severe perturbations (e.g. up to %50 cropping by area with 20 degree rotations) on images while avoiding misclassification. Our methods are general enough to yield a watermark embedding scheme, which will be studied in another paper.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "bb1554d174df80e7db20e943b4a69249",
"text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.",
"title": ""
},
{
"docid": "f8ac5a0dbd0bf8228b8304c1576189b9",
"text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "c03a2f4634458d214d961c3ae9438d1d",
"text": "An accurate small-signal model of three-phase photovoltaic (PV) inverters with a high-order grid filter is derived in this paper. The proposed model takes into account the influence of both the inverter operating point and the PV panel characteristics on the inverter dynamic response. A sensitivity study of the control loops to variations of the DC voltage, PV panel transconductance, supplied power, and grid inductance is performed using the proposed small-signal model. Analytical and experimental results carried out on a 100-kW PV inverter are presented.",
"title": ""
},
{
"docid": "28bb2aa8a05e90072e2dc4a3b5d871d5",
"text": "Radio Frequency Identification (RFID) security has not been properly handled in numerous applications, such as in public transportation systems. In this paper, a methodology to reverse engineer and detect security flaws is put into practice. Specifically, the communications protocol of an ISO/IEC 14443-B public transportation card used by hundreds of thousands of people in Spain was analyzed. By applying the methodology with a hardware tool (Proxmark 3), it was possible to access private information (e.g. trips performed, buses taken, fares applied…), to capture tag-reader communications, and even emulate both tags and readers.",
"title": ""
},
{
"docid": "7db9cf29dd676fa3df5a2e0e95842b6e",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
{
"docid": "207e90cebdf23fb37f10b5ed690cb4fc",
"text": "In the scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic. Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system which limits potentially fruitful exchanges between scientific disciplines. In this paper, we introduce a novel experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries. The approach starts by learning from a standard scientific categorization and a sample of topic tagged articles to find semantically relevant articles and enrich its metadata accordingly. Our proposed pipeline aims to enable researchers reaching articles from various disciplines that tend to use different terminologies. It allows retrieving semantically relevant articles given a limited known variation of search terms. In addition to achieving an accuracy that is higher than an expanded query based method using a topic synonym set extracted from a semantic network, our experiments also show a higher computational scalability versus other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research in the topic.",
"title": ""
},
{
"docid": "d004de75764e87fe246617cb7e3259a6",
"text": "OBJECTIVE\nClinical decision-making regarding the prevention of depression is complex for pregnant women with histories of depression and their health care providers. Pregnant women with histories of depression report preference for nonpharmacological care, but few evidence-based options exist. Mindfulness-based cognitive therapy has strong evidence in the prevention of depressive relapse/recurrence among general populations and indications of promise as adapted for perinatal depression (MBCT-PD). With a pilot randomized clinical trial, our aim was to evaluate treatment acceptability and efficacy of MBCT-PD relative to treatment as usual (TAU).\n\n\nMETHOD\nPregnant adult women with depression histories were recruited from obstetric clinics at 2 sites and randomized to MBCT-PD (N = 43) or TAU (N = 43). Treatment acceptability was measured by assessing completion of sessions, at-home practice, and satisfaction. Clinical outcomes were interview-based depression relapse/recurrence status and self-reported depressive symptoms through 6 months postpartum.\n\n\nRESULTS\nConsistent with predictions, MBCT-PD for at-risk pregnant women was acceptable based on rates of completion of sessions and at-home practice assignments, and satisfaction with services was significantly higher for MBCT-PD than TAU. Moreover, at-risk women randomly assigned to MBCT-PD reported significantly improved depressive outcomes compared with participants receiving TAU, including significantly lower rates of depressive relapse/recurrence and lower depressive symptom severity during the course of the study.\n\n\nCONCLUSIONS\nMBCT-PD is an acceptable and clinically beneficial program for pregnant women with histories of depression; teaching the skills and practices of mindfulness meditation and cognitive-behavioral therapy during pregnancy may help to reduce the risk of depression during an important transition in many women's lives.",
"title": ""
},
{
"docid": "97a3c599c7410a0e12e1784585260b95",
"text": "This research focuses on 3D printed carbon-epoxy composite components in which the reinforcing carbon fibers have been preferentially aligned during the micro-extrusion process. Most polymer 3D printing techniques use unreinforced polymers. By adding carbon fiber as a reinforcing material, properties such as mechanical strength, electrical conductivity, and thermal conductivity can be greatly enhanced. However, these properties are significantly influenced by the degree of fiber alignment (or lack thereof). A Design of Experiments (DOE) approach was used to identify significant process parameters affecting preferential fiber alignment in the micro-extrusion process. A 2D Fast Fourier Transform (FFT) was used with ImageJ software to quantify the degree of fiber alignment in micro-extruded carbonepoxy pastes. Based on analysis of experimental results, tensile test samples were printed with fibers aligned parallel and perpendicular to the tensile axis. A standard test method for tensile properties of plastic revealed that the 3D printed test coupons with fibers aligned parallel to the tensile axis were significantly better in tensile strength and modulus. Results of this research can be used to 3D print components with locally controlled fiber alignment that is difficult to achieve via conventional composite manufacturing techniques.",
"title": ""
},
{
"docid": "8a613c019c6b3b83d55378c3149df8f7",
"text": "For the performance and accuracy requirements of brushless DC motor speed control system, this paper integrates organically neural network and the traditional PID to constitute brushless DC motor speed control system based on BP neural network self-tuning parameters PID control. The traditional PID controller is used in the beginning several seconds, and then another parameter self-tuning PID controller based on BP neural network is converted to after training for seconds. Simulation model was established in Matlab/Simulink. The simulation results indicate that the neutral network PID controller can improve the robustness of the system and has better adaptabilities to the model and environments compared with the traditional PID controller.",
"title": ""
},
{
"docid": "22bbeceff175ee2e9a462b753ce24103",
"text": "BACKGROUND\nEUS-guided FNA can help diagnose and differentiate between various pancreatic and other lesions.The aim of this study was to compare approaches among involved/relevant physicians to the controversies surrounding the use of FNA in EUS.\n\n\nMETHODS\nA five-case survey was developed, piloted, and validated. It was collected from a total of 101 physicians, who were all either gastroenterologists (GIs), surgeons or oncologists. The survey compared the management strategies chosen by members of these relevant disciplines regarding EUS-guided FNA.\n\n\nRESULTS\nFor CT operable T2NOM0 pancreatic tumors the research demonstrated variance as to whether to undertake EUS-guided FNA, at p < 0.05. For inoperable pancreatic tumors 66.7% of oncologists, 62.2% of surgeons and 79.1% of GIs opted for FNA (p < 0.05). For cystic pancreatic lesions, oncologists were more likely to send patients to surgery without FNA. For stable simple pancreatic cysts (23 mm), most physicians (66.67%) did not recommend FNA. For a submucosal gastric 19 mm lesion, 63.2% of surgeons recommended FNA, vs. 90.0% of oncologists (p < 0.05).\n\n\nCONCLUSIONS\nControversies as to ideal application of EUS-FNA persist. Optimal guidelines should reflect the needs and concerns of the multidisciplinary team who treat patients who need EUS-FNA. Multi-specialty meetings assembled to manage patients with these disorders may be enlightening and may help develop consensus.",
"title": ""
},
{
"docid": "c5a15fd3102115aebc940cbc4ce5e474",
"text": "We present a novel approach for visual detection and attribute-based search of vehicles in crowded surveillance scenes. Large-scale processing is addressed along two dimensions: 1) large-scale indexing, where hundreds of billions of events need to be archived per month to enable effective search and 2) learning vehicle detectors with large-scale feature selection, using a feature pool containing millions of feature descriptors. Our method for vehicle detection also explicitly models occlusions and multiple vehicle types (e.g., buses, trucks, SUVs, cars), while requiring very few manual labeling. It runs quite efficiently at an average of 66 Hz on a conventional laptop computer. Once a vehicle is detected and tracked over the video, fine-grained attributes are extracted and ingested into a database to allow future search queries such as “Show me all blue trucks larger than 7 ft. length traveling at high speed northbound last Saturday, from 2 pm to 5 pm”. We perform a comprehensive quantitative analysis to validate our approach, showing its usefulness in realistic urban surveillance settings.",
"title": ""
},
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "2210176bcb0f139e3f7f7716447f3920",
"text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch[24]. We believe it can be generalized to other digital libraries.",
"title": ""
},
{
"docid": "15c3ddb9c01d114ab7d09f010195465b",
"text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.",
"title": ""
},
{
"docid": "a478b6f7accfb227e6ee5a6b35cd7fa1",
"text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness",
"title": ""
}
] |
scidocsrr
|
6b6cac5e93def751f9bbdea50eb6f793
|
A 5.8nW, 45ppm/°C on-chip CMOS wake-up timer using a constant charge subtraction scheme
|
[
{
"docid": "09cdd9081d3a7ec3fbca31f2dc577ae7",
"text": "A self-chopped relaxation oscillator with adaptive supply generation provides the stable output clock against variations in temperature and supply voltages. The frequency drift is less than ±0.1% for the supply voltage changing from 1.6 to 3.2 V and ±0.1% for a temperature range from -20 to 100°C, which is reduced by 83% with the self-chopped technique. This relaxation oscillator is implemented in a 60-nm CMOS technology with its active area equals to 0.048 mm2. It consumes 2.8 uA from a 1.6-V supply.",
"title": ""
},
{
"docid": "55658c75bcc3a12c1b3f276050f28355",
"text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC <; 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.",
"title": ""
}
] |
[
{
"docid": "d452700b9c919ba62156beecb0d50b91",
"text": "In this paper we propose a solution to the problem of body part segmentation in noisy silhouette images. In developing this solution we revisit the issue of insufficient labeled training data, by investigating how synthetically generated data can be used to train general statistical models for shape classification. In our proposed solution we produce sequences of synthetically generated images, using three dimensional rendering and motion capture information. Each image in these sequences is labeled automatically as it is generated and this labeling is based on the hand labeling of a single initial image.We use shape context features and Hidden Markov Models trained based on this labeled synthetic data. This model is then used to segment silhouettes into four body parts; arms, legs, body and head. Importantly, in all the experiments we conducted the same model is employed with no modification of any parameters after initial training.",
"title": ""
},
{
"docid": "704598402da135b6b7e3251de4c6edf8",
"text": "Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
{
"docid": "5d63815adaad5d2c1b80ddd125157842",
"text": "We consider the problem of building scalable semantic parsers for Freebase, and present a new approach for learning to do partial analyses that ground as much of the input text as possible without requiring that all content words be mapped to Freebase concepts. We study this problem on two newly introduced large-scale noun phrase datasets, and present a new semantic parsing model and semi-supervised learning approach for reasoning with partial ontological support. Experiments demonstrate strong performance on two tasks: referring expression resolution and entity attribute extraction. In both cases, the partial analyses allow us to improve precision over strong baselines, while parsing many phrases that would be ignored by existing techniques.",
"title": ""
},
{
"docid": "734eb2576affeb2e34f07b5222933f12",
"text": "In this paper, a novel chemical sensor system utilizing an Ion-Sensitive Field Effect Transistor (ISFET) for pH measurement is presented. Compared to other interface circuits, this system uses auto-zero amplifiers with a pingpong control scheme and array of Programmable-Gate Ion-Sensitive Field Effect Transistor (PG-ISFET). By feedback controlling the programable gates of ISFETs, the intrinsic sensor offset can be compensated for uniformly. Furthermore the chemical signal sensitivity can be enhanced due to the feedback system on the sensing node. A pingpong structure and operation protocol has been developed to realize the circuit, reducing the error and achieve continuous measurement. This system has been designed and fabricated in AMS 0.35µm, to compensate for a threshold voltage variation of ±5V and enhance the pH sensitivity to 100mV/pH.",
"title": ""
},
{
"docid": "4fbde6cd9d511072680a4f20f6674acf",
"text": "A 50-year-old man developed numerous pustules and bullae on the trunk and limbs 15 days after anal fissure surgery. The clinicopathological diagnosis was iododerma induced by topical povidone-iodine sitz baths postoperatively. Complete resolution occurred within 3 weeks using systemic corticosteroids and forced diuresis.",
"title": ""
},
{
"docid": "4b051e3908eabb5f550094ebabf6583d",
"text": "This paper presents a review of modern cooling system employed for the thermal management of power traction machines. Various solutions for heat extractions are described: high thermal conductivity insulation materials, spray cooling, high thermal conductivity fluids, combined liquid and air forced convection, and loss mitigation techniques.",
"title": ""
},
{
"docid": "a80a251eb27f1f337fb18442de423b66",
"text": "We address the synthesis of controllers for large groups of robots and sensors, tackling the specific problem of controlling a swarm of robots to generate patterns specified by implicit functions of the form s(x, y) = 0. We derive decentralized controllers that allow the robots to converge to a given curve S and spread along this curve. We consider implicit functions that are weighted sums of radial basis functions created by interpolating from a set of constraint points, which give us a high degree of control over the desired 2D curves. We describe the generation of simple plans for swarms of robots using these functions and illustrate our approach through simulations and real experiments.",
"title": ""
},
{
"docid": "1f752034b5307c0118d4156d0b95eab3",
"text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.",
"title": ""
},
{
"docid": "05a5e3849c9fca4d788aa0210d8f7294",
"text": "The growth of mobile phone users has lead to a dramatic increasing of SMS spam messages. Recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. In practice, fighting such plague is difficult by several factors, including the lower rate of SMS that has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software. Probably, one of the major concerns in academic settings is the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. Moreover, traditional content-based filters may have their performance seriously degraded since SMS messages are fairly short and their text is generally rife with idioms and abbreviations. In this paper, we present details about a new real, public and non-encoded SMS spam collection that is the largest one as far as we know. Moreover, we offer a comprehensive analysis of such dataset in order to ensure that there are no duplicated messages coming from previously existing datasets, since it may ease the task of learning SMS spam classifiers and could compromise the evaluation of methods. Additionally, we compare the performance achieved by several established machine learning techniques. In summary, the results indicate that the procedure followed to build the collection does not lead to near-duplicates and, regarding the classifiers, the Support Vector Machines outperforms other evaluated techniques and, hence, it can be used as a good baseline for further comparison. Keywords—Mobile phone spam; SMS spam; spam filtering; text categorization; classification.",
"title": ""
},
{
"docid": "bf56462f283d072c4157d5c5665eead3",
"text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.",
"title": ""
},
{
"docid": "65192c3b3e3bfe96e187bf391df049b4",
"text": "This paper presents a new single-stage singleswitch (S4) high power factor correction (PFC) AC/DC converter suitable for low power applications (< 150 W) with a universal input voltage range (90–265 Vrms). The proposed topology integrates a buck-boost input current shaper followed by a buck and a buck-boost converter, respectively. As a result, the proposed converter can operate with larger duty cycles compared to the exiting S4 topologies; hence, making them suitable for extreme step-down voltage conversion applications. Several desirable features are gained when the three integrated converter cells operate in discontinuous conduction mode (DCM). These features include low semiconductor voltage stress, zero-current switch at turn-on, and simple control with a fast well-regulated output voltage. A detailed circuit analysis is performed to derive the design equations. The theoretical analysis and effectiveness of the proposed approach are confirmed by experimental results obtained from a 35-W/12-Vdc laboratory prototype.",
"title": ""
},
{
"docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59",
"text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and",
"title": ""
},
{
"docid": "3472ffbc39fce27a2878c6564a99e1fe",
"text": "This paper tests for evidence of contagion between the financial markets of Thailand, Malaysia, Indonesia, Korea, and the Philippines. We find that correlations in currency and sovereign spreads increase significantly during the crisis period, whereas the equity market correlations offer mixed evidence. We construct a set of dummy variables using daily news to capture the impact of own-country and cross-border news on the markets. We show that after controlling for owncountry news and other fundamentals, there is evidence of cross-border contagion in the currency and equity markets. [JEL F30, F40, G15]",
"title": ""
},
{
"docid": "2ee4b0cac13eb147ea014fc9787d6f54",
"text": "Implementation of e-banking services system provides various advantages for the company are cost and time efficiencies, and be able to create differentiation and able to target market segments with low cost. To determine the underlying motives customers to select and prefer one channel and the other required a systematic exploration of the customer perception. This study aims to understand the customers' perception of various e-banking channels and how their motives in choosing the channel usage. By understanding customer perceptions about various banking channels, it is expected to be more helpful in assisting banks in Indonesia to introduce e-banking. A convinience sampling technique used to select 234 customers who surveyed in this study. Findings suggest that Automatic Teller Machine was percieved to be low cost, low complexity and most usefulness. EFT is also almost similar to the ATM, get the low cost and low complexity but low on security. Perception on Internet banking was secure and usefulness and high on privacy. Meanwhile perception on SMS banking was easy to access and also high on privacy. And phone banking was perceived to be the most expensive and inaccurate. Future studies are expected in addition to using other multivariate techniques would also be able to add other attributes are more influential.",
"title": ""
},
{
"docid": "d88043824732d96340028c74489e01a0",
"text": "Removing perspective distortion from hand held camera captured document images is one of the primitive tasks in document analysis, but unfortunately no such method exists that can reliably remove the perspective distortion from document images automatically. In this paper, we propose a convolutional neural network based method for recovering homography from hand-held camera captured documents. Our proposed method works independent of document’s underlying content and is trained end-to-end in a fully automatic way. Specifically, this paper makes following three contributions: firstly, we introduce a large scale synthetic dataset for recovering homography from documents images captured under different geometric and photometric transformations; secondly, we show that a generic convolutional neural network based architecture can be successfully used for regressing the corners positions of documents captured under wild settings; thirdly, we show that L1 loss can be reliably used for corners regression. Our proposed method gives state-of-the-art performance on the tested datasets, and has potential to become an integral part of document analysis pipeline.",
"title": ""
},
{
"docid": "645a92cd2f789f8708a522a35100611b",
"text": "INTRODUCTION\nMalignant Narcissism has been recognized as a serious condition but it has been largely ignored in psychiatric literature and research. In order to bring this subject to the attention of mental health professionals, this paper presents a contemporary synthesis of the biopsychosocial dynamics and recommendations for treatment of Malignant Narcissism.\n\n\nMETHODS\nWe reviewed the literature on Malignant Narcissism which was sparse. It was first described in psychiatry by Otto Kernberg in 1984. There have been few contributions to the literature since that time. We discovered that the syndrome of Malignant Narcissism was expressed in fairy tales as a part of the collective unconscious long before it was recognized by psychiatry. We searched for prominent malignant narcissists in recent history. We reviewed the literature on treatment and developed categories for family assessment.\n\n\nRESULTS\nMalignant Narcissism is described as a core Narcissistic personality disorder, antisocial behavior, ego-syntonic sadism, and a paranoid orientation. There is no structured interview or self-report measure that identifies Malignant Narcissism and this interferes with research, clinical diagnosis and treatment. This paper presents a synthesis of current knowledge about Malignant Narcissism and proposes a foundation for treatment.\n\n\nCONCLUSIONS\nMalignant Narcissism is a severe personality disorder that has devastating consequences for the family and society. It requires attention within the discipline of psychiatry and the social science community. We recommend treatment in a therapeutic community and a program of prevention that is focused on psychoeducation, not only in mental health professionals, but in the wider social community.",
"title": ""
},
{
"docid": "c3e037cb49fb639217142437ed3e8e04",
"text": "Machine learning models are now used extensively for decision making in diverse applications, but for non-experts they are essentially black boxes. While there has been some work on the explanation of classifications, these are targeted at the expert user. For the non-expert, a better model is one of justification not detailing how the model made its decision, but justifying it to the human user on his or her terms. In this paper we introduce the idea of a justification narrative: a simple model-agnostic mapping of the essential values underlying a classification to a semantic space. We present a package that automatically produces these narratives and realizes them visually or textually.",
"title": ""
},
{
"docid": "5116a1defa7bf03633bdb5488b585157",
"text": "Face spoofing can be performed in a variety of ways such as replay attack, print attack, and mask attack to deceive an automated recognition algorithm. To mitigate the effect of spoofing attempts, face anti-spoofing approaches aim to distinguish between genuine samples and spoofed samples. The focus of this paper is to detect spoofing attempts via Haralick texture features. The proposed algorithm extracts block-wise Haralick texture features from redundant discrete wavelet transformed frames obtained from a video. Dimensionality of the feature vector is reduced using principal component analysis and two class classification is performed using support vector machine. Results on the 3DMAD database show that the proposed algorithm achieves state-of-the-art results for both frame-based and video-based approaches, including 100% accuracy on video-based spoofing detection. Further, the results are reported on existing benchmark databases on which the proposed feature extraction framework archives state-of-the-art performance.",
"title": ""
},
{
"docid": "0fdd503f2687bbc786eff4db750b0911",
"text": "Researchers have access to large online archives of scientific articles. As a consequence, finding relevant papers has become more difficult. Newly formed online communities of researchers sharing citations provides a new way to solve this problem. In this paper, we develop an algorithm to recommend scientific articles to users of an online community. Our approach combines the merits of traditional collaborative filtering and probabilistic topic modeling. It provides an interpretable latent structure for users and items, and can form recommendations about both existing and newly published articles. We study a large subset of data from CiteULike, a bibliography sharing service, and show that our algorithm provides a more effective recommender system than traditional collaborative filtering.",
"title": ""
}
] |
scidocsrr
|
244cd6cc9ed077cee7e478eb0481ea58
|
Chapter 2 The role of motivation in promoting and sustaining self-regulated learning
|
[
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
}
] |
[
{
"docid": "78bd1c7ea28a4af60991b56ccd658d7f",
"text": "The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"title": ""
},
{
"docid": "e81bfbeaf2b4d6575f67ad5268e5b6f3",
"text": "Combination antibiotic therapy for Gram-negative sepsis is controversial. The present review provides a brief summary of the existing knowledge on combination therapy for severe infections with multidrug-resistant Pseudomonas spp., Acinetobacter spp., and Enterobacteriaceae. Empirical combination antibiotic therapy is recommended for severe sepsis and septic shock to reduce mortality related to inappropriate antibiotic treatment. Because definitive combination therapy has not been proven superior to monotherapy in meta-analyses, it is generally advised to de-escalate antibiotic therapy when the antibiotic susceptibility profile is known, although it cannot be excluded that some subgroups of patients might still benefit from continued combination therapy. Definitive combination therapy is recommended for carbapenemase-producing Enterobacteriaceae and should also be considered for severe infections with Pseudomonas and Acinetobacter spp. when beta-lactams cannot be used. Because resistance to broad-spectrum beta-lactams is increasing in Gram-negative bacteria and because no new antibiotics are expected to become available in the near future, the antibacterial potential of combination therapy should be further explored. In vitro data suggest that combinations can be effective even if the bacteria are resistant to the individual antibiotics, although existing evidence is insufficient to support the choice of combinations and explain the synergistic effects observed. In vitro models can be used to screen for effective combinations that can later be validated in animal or clinical studies. Further, in the absence of clinical evidence, in vitro data might be useful in supporting therapeutic decisions for severe infections with multidrug-resistant Gram-negative bacteria.",
"title": ""
},
{
"docid": "d686161665a3cee49cded0b64946d0dc",
"text": "A new technique is proposed to classify the defects that could occur on the PCB using neural network paradigm. The algorithms to segment the image into basic primitive patterns, enclosing the primitive patterns, patterns assignment, patterns normalization, and classification have been developed based on binary morphological image processing and Learning Vector Quantization (LVQ) neural network. Thousands of defective patterns have been used for training, and the neural network is tested for evaluating its performance. A defective PCB image is used to ensure the function of the proposed technique.",
"title": ""
},
{
"docid": "6b70e5f3216ce5c69f4ccda7deb55b33",
"text": "Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years has seen growing interest in fast solvers to also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.",
"title": ""
},
{
"docid": "e502cdbbbf557c8365b0d4b69745e225",
"text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.",
"title": ""
},
{
"docid": "9cc299e5b86ba95372351ef31e567b31",
"text": "449 www.erpublication.org Abstract—Brain tumor is an uncontrolled growth of tissues in human brain. This tumor, when turns in to cancer becomes life-threatening. For images of human brain different techniques are used to capture image. These techniques involve X-Ray, Computer Tomography (CT) and Magnetic Resonance imaging MRI. For diagnosis, MRI is used to distinguish pathologic tissue from normal tissue, especially for brain related disorders and has more advantages over other techniques. The fundamental aspect that makes segmentation of medical images difficult is the complexity and variability of the anatomy that is being imaged. It may not be possible to locate certain structures without detailed anatomical knowledge. In this paper, a method to extract the brain tumor from the MRI image using clustering and watershed segmentation is proposed. The proposed method combines K-means clustering and watershed segmentation after applying some morphological operations for better results. The major advantage of watershed segmentation is that it is able to construct a complete division of the image but the disadvantages are over segmentation and sensitivity which was overcome by using K-means clustering to produce a primary segmentation of the image.",
"title": ""
},
{
"docid": "6b6fd5bfbe1745a49ce497490cef949d",
"text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.",
"title": ""
},
{
"docid": "36434b0f4f6b6d567f4b5eed720a09da",
"text": "Integrated information systems for managing patient data transform the nature of hospital work to the extent that the work practices, the responsibilities, even the professional identities are likely to undergo major changes. Therefore, during the organizational implementation of the IS, attention should be paid to the future users and how they understand and see what is going on. Here the focus is on these interpretation processes, analyzed as technological frames. That is, people develop different assumptions, expectations and knowledge concerning new technology. During this sense-making process they build their idea of that technology, its technological frame. We analyzed the pre-implementation frames that could be discerned in 24 interviews of hospital personnel. Main influences on the frames in this case were the work role in the organization, knowledge about the new system, and attitudes toward the old systems. The social context appeared to have a significant influence in the users' interpretation processes and the frames seemed to be congruent within one group. So far, the incongruence between groups appeared to have caused no major problems for the implementation.",
"title": ""
},
{
"docid": "31e955e62361b6857b31d09398760830",
"text": "Measuring “how much the human is in the interaction” - the level of engagement - is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one of such indirect measures. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator), to on-line computation of with-me-ness. We report as well on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.",
"title": ""
},
{
"docid": "68735fb7f8f0485c0e3048fdf156973a",
"text": "Recently, as biometric technology grows rapidly, the importance of fingerprint spoof detection technique is emerging. In this paper, we propose a technique to detect forged fingerprints using contrast enhancement and Convolutional Neural Networks (CNNs). The proposed method detects the fingerprint spoof by performing contrast enhancement to improve the recognition rate of the fingerprint image, judging whether the sub-block of fingerprint image is falsified through CNNs composed of 6 weight layers and totalizing the result. Our fingerprint spoof detector has a high accuracy of 99.8% on average and has high accuracy even after experimenting with one detector in all datasets.",
"title": ""
},
{
"docid": "31dbf3fcd1a70ad7fb32fb6e69ef88e3",
"text": "OBJECTIVE\nHealth care researchers have not taken full advantage of the potential to effectively convey meaning in their multivariate data through graphical presentation. The aim of this paper is to translate knowledge from the fields of analytical chemistry, toxicology, and marketing research to the field of medicine by introducing the radar plot, a useful graphical display method for multivariate data.\n\n\nSTUDY DESIGN AND SETTING\nDescriptive study based on literature review.\n\n\nRESULTS\nThe radar plotting technique is described, and examples are used to illustrate not only its programming language, but also the differences in tabular and bar chart approaches compared to radar-graphed data displays.\n\n\nCONCLUSION\nRadar graphing, a form of radial graphing, could have great utility in the presentation of health-related research, especially in situations in which there are large numbers of independent variables, possibly with different measurement scales. This technique has particular relevance for researchers who wish to illustrate the degree of multiple-group similarity/consensus, or group differences on multiple variables in a single graphical display.",
"title": ""
},
{
"docid": "ffe6a2a92fc8cfb4c7e44f79b6038e88",
"text": "This paper aims at development of non linear dynamic model for Magnetic Levitation System and proposed linear and nonlinear state space controllers. The linear controller was designed by linearizing the model around equilibrium point, while nonlinear controller was based on feedback linearization where a nonlinear state-space transformation is used to linearize the system exactly. Relative degree of the system was determined and conditions were found that ensure relative degree be well defined. Magnetic Levitation system considered in this study is taken as a ferromagnetic ball suspended in a voltage controlled magnetic field. Dynamic behaviour of the system was modeled by the study of electromagnetic and mechanical subsystems. State space model was derived from the system equations. Linear full state feedback controller along with linear observer was designed and was compared with nonlinear full state feedback with nonlinear observer. Both linear and nonlinear controllers were simulated using matlab and results are presented. Key-Words: Magnetic Levitation System; Nonlinear Model; Exact Linearization; Electromagnet; unstable",
"title": ""
},
{
"docid": "4818e47ceaec70457701649832fb90c4",
"text": "Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let &thgr; be the fraction of jobs routed to the first I/O device, so that 1 - &thgr; is the fraction routed to the second. Suppose that α = α(&thgr;) is the steady-sate amount of time that a job spends in the system. Given that &thgr; is a decision variable, a designer might wish to minimize α(&thgr;) over &thgr;. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.\n In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we will present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood-ratio-gradient estimators for both time-homogeneous and non-time homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines α(&thgr;) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.",
"title": ""
},
{
"docid": "cacf4a2d7004bccecb0e8965de695e69",
"text": "The WebNLG challenge consists in mapping sets of RDF triples to text. It provides a common benchmark on which to train, evaluate and compare “microplanners”, i.e. generation systems that verbalise a given content by making a range of complex interacting choices including referring expression generation, aggregation, lexicalisation, surface realisation and sentence segmentation. In this paper, we introduce the microplanning task, describe data preparation, introduce our evaluation methodology, analyse participant results and provide a brief description of the participating systems.",
"title": ""
},
{
"docid": "15c638985f66b70e8bbd46b6e078ea7d",
"text": "Strigolactones are a structurally diverse class of plant hormones that control many aspects of shoot and root growth. Strigolactones are also exuded by plants into the rhizosphere, where they promote symbiotic interactions with arbuscular mycorrhizal fungi and germination of root parasitic plants in the Orobanchaceae family. Therefore, understanding how strigolactones are made, transported, and perceived may lead to agricultural innovations as well as a deeper knowledge of how plants function. Substantial progress has been made in these areas over the past decade. In this review, we focus on the molecular mechanisms, core developmental roles, and evolutionary history of strigolactone signaling. We also propose potential translational applications of strigolactone research to agriculture.",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "cfa6b8603acc094a2a476806199819fb",
"text": "Several theories claim that dreaming is a random by-product of REM sleep physiology and that it does not serve any natural function. Phenomenal dream content, however, is not as disorganized as such views imply. The form and content of dreams is not random but organized and selective: during dreaming, the brain constructs a complex model of the world in which certain types of elements, when compared to waking life, are underrepresented whereas others are over represented. Furthermore, dream content is consistently and powerfully modulated by certain types of waking experiences. On the basis of this evidence, I put forward the hypothesis that the biological function of dreaming is to simulate threatening events, and to rehearse threat perception and threat avoidance. To evaluate this hypothesis, we need to consider the original evolutionary context of dreaming and the possible traces it has left in the dream content of the present human population. In the ancestral environment human life was short and full of threats. Any behavioral advantage in dealing with highly dangerous events would have increased the probability of reproductive success. A dream-production mechanism that tends to select threatening waking events and simulate them over and over again in various combinations would have been valuable for the development and maintenance of threat-avoidance skills. Empirical evidence from normative dream content, children's dreams, recurrent dreams, nightmares, post traumatic dreams, and the dreams of hunter-gatherers indicates that our dream-production mechanisms are in fact specialized in the simulation of threatening events, and thus provides support to the threat simulation hypothesis of the function of dreaming.",
"title": ""
},
{
"docid": "88520d58d125e87af3d5ea6bb4335c4f",
"text": "We present an algorithm for marker-less performance capture of interacting humans using only three hand-held Kinect cameras. Our method reconstructs human skeletal poses, deforming surface geometry and camera poses for every time step of the depth video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Only the combination of geometric and photometric correspondences and the integration of human pose and camera pose estimation enables reliable performance capture with only three sensors. As opposed to previous performance capture methods, our algorithm succeeds on general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.",
"title": ""
},
{
"docid": "e4f3337ce89cac4531ec3e7602d331ba",
"text": "Existing object proposal algorithms usually search for possible object regions over multiple locations and scales separately, which ignore the interdependency among different objects and deviate from the human perception procedure. To incorporate global interdependency between objects into object localization, we propose an effective Tree-structured Reinforcement Learning (Tree-RL) approach to sequentially search for objects by fully exploiting both the current observation and historical search paths. The Tree-RL approach learns multiple searching policies through maximizing the long-term reward that reflects localization accuracies over all the objects. Starting with taking the entire image as a proposal, the Tree-RL approach allows the agent to sequentially discover multiple objects via a tree-structured traversing scheme. Allowing multiple near-optimal policies, Tree-RL offers more diversity in search paths and is able to find multiple objects with a single feedforward pass. Therefore, Tree-RL can better cover different objects with various scales which is quite appealing in the context of object proposal. Experiments on PASCAL VOC 2007 and 2012 validate the effectiveness of the Tree-RL, which can achieve comparable recalls with current object proposal algorithms via much fewer candidate windows.",
"title": ""
},
{
"docid": "087ca9ca531f14e8546c9f03d9e76ed3",
"text": "Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures. The main reason is that most of the current generative models fail to explore the structures in the images including spatial layout and semantic relations between objects. To address this issue, we propose a novel deep structured generative model which boosts generative adversarial networks (GANs) with the aid of structure information. In particular, the layout or structure of the scene is encoded by a stochastic and-or graph (sAOG), in which the terminal nodes represent single objects and edges represent relations between objects. With the sAOG appropriately harnessed, our model can successfully capture the intrinsic structure in the scenes and generate images of complicated scenes accordingly. Furthermore, a detection network is introduced to infer scene structures from a image. Experimental results demonstrate the effectiveness of our proposed method on both modeling the intrinsic structures, and generating realistic images.",
"title": ""
}
] |
scidocsrr
|
8f48c709df5899a124a12feb1ebf6042
|
A Novel Hybrid Self-Adaptive Bat Algorithm
|
[
{
"docid": "828c54f29339e86107f1930ae2a5e77f",
"text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] |
[
{
"docid": "9a960c22af98114a91b00f66c7b4498f",
"text": "Hybrid Intelligent Systems that combine knowledge-based and artificial neural network systems typically have four phases involving domain knowledge representation, mapping of this knowledge into an initial connectionist architecture, network training, and rule extraction, respectively. The final phase is important because it can provide a trained connectionist architecture with explanation power and validate its output decisions. Moreover, it can be used to refine and maintain the initial knowledge acquired from domain experts. In this paper, we present three rule-extraction techniques. The first technique extracts a set of binary rules from any type of neural network. The other two techniques are specific to feedforward networks, with a single hidden layer of sigmoidal units. Technique 2 extracts partial rules that represent the most important embedded knowledge with an adjustable level of detail, while the third technique provides a more comprehensive and universal approach. A rule-evaluation technique, which orders extracted rules based on three performance measures, is then proposed. The three techniques area applied to the iris and breast cancer data sets. The extracted rules are evaluated qualitatively and quantitatively, and are compared with those obtained by",
"title": ""
},
{
"docid": "9b99371de5da25c3e2cc2d8787da7d21",
"text": "lations, is a critical ecological process (Ims and Yoccoz 1997). It can maintain genetic diversity, rescue declining populations, and re-establish extirpated populations. Sufficient movement of individuals between isolated, extinction-prone populations can allow an entire network of populations to persist via metapopulation dynamics (Hanski 1991). As areas of natural habitat are reduced in size and continuity by human activities, the degree to which the remaining fragments are functionally linked by dispersal becomes increasingly important. The strength of those linkages is determined largely by a property known as “connectivity”, which, despite its intuitive appeal, is inconsistently defined. At one extreme, metapopulation ecologists argue for a habitat patch-level definition, while at the other, landscape ecologists insist that connectivity is a landscape-scale property (Merriam 1984; Taylor et al. 1993; Tischendorf and Fahrig 2000; Moilanen and Hanski 2001; Tischendorf 2001a; Moilanen and Nieminen 2002). Differences in perspective notwithstanding, theoreticians do agree that connectivity has undeniable effects on many population processes (Wiens 1997; Moilanen and Hanski 2001). It is therefore desirable to quantify connectivity and use these measurements as a basis for decision making. Currently, many reserve design algorithms factor in some measure of connectivity when weighing alternative plans (Siitonen et al. 2002, 2003; Singleton et al. 2002; Cabeza 2003). Consideration of connectivity during the reserve design process could highlight situations where it really matters. For example, alternative reserve designs that are similar in other factors such as area, habitat quality, and cost may differ greatly in connectivity (Siitonen et al. 2002). This matters because the low-connectivity scenarios may not be able to support viable populations of certain species over long periods of time. Analyses of this sort could also redirect some project resources towards improving the connectivity of a reserve network by building movement corridors or acquiring small, otherwise undesirable habitat patches that act as links between larger patches (Keitt et al. 1997). Reserve designs could therefore include the demographic and genetic benefits of increased connectivity without substantially increasing the cost of the project (eg Siitonen et al. 2002). If connectivity is to serve as a guide, at least in part, for conservation decision-making, it clearly matters how it is measured. Unfortunately, the ecological literature is awash with different connectivity metrics. How are land managers and decision makers to efficiently choose between these alternatives, when ecologists cannot even agree on a basic definition of connectivity, let alone how it is best measured? Aside from the theoretical perspectives to which they are tied, these metrics differ in two important regards: the type of data they require and the level of detail they provide. Here, we attempt to cut through some of the confusion surrounding connectivity by developing a classification scheme based on these key differences between metrics. 529",
"title": ""
},
{
"docid": "60c9355aba12e84461519f28b157c432",
"text": "Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use ad hoc gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide quasiinvariance to general time transformations in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long term dependencies, with minimal implementation effort. Recurrent neural networks (e.g. (Jaeger, 2002)) are a standard machine learning tool to model and represent temporal data; mathematically they amount to learning the parameters of a parameterized dynamical system so that its behavior optimizes some criterion, such as the prediction of the next data in a sequence. Handling long term dependencies in temporal data has been a classical issue in the learning of recurrent networks. Indeed, stability of a dynamical system comes at the price of exponential decay of the gradient signals used for learning, a dilemma known as the vanishing gradient problem (Pascanu et al., 2012; Hochreiter, 1991; Bengio et al., 1994). This has led to the introduction of recurrent models specifically engineered to help with such phenomena. Use of feedback connections (Hochreiter & Schmidhuber, 1997) and control of feedback weights through gating mechanisms (Gers et al., 1999) partly alleviate the vanishing gradient problem. The resulting architectures, namely long short-term memories (LSTMs (Hochreiter & Schmidhuber, 1997; Gers et al., 1999)) and gated recurrent units (GRUs (Chung et al., 2014)) have become a standard for treating sequential data. Using orthogonal weight matrices is another proposed solution to the vanishing gradient problem, thoroughly studied in (Saxe et al., 2013; Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016; Henaff et al., 2016). This comes with either computational overhead, or limitation in representational power. Furthermore, restricting the weight matrices to the set of orthogonal matrices makes forgetting of useless information difficult. The contribution of this paper is threefold: ∙ We show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models (Section 1). This provides a clean derivation of part of the popular LSTM and GRU architectures from first principles. In this framework, gate values appear as time contraction or time dilation coefficients, similar in spirit to the notion of time constant introduced in (Mozer, 1992). ∙ From these insights, we provide precise prescriptions on how to initialize gate biases (Section 2) depending on the range of time dependencies to be captured. It has previously been advocated that setting the bias of the forget gate of LSTMs to 1 or 2 provides overall good performance (Gers & Schmidhuber, 2000; Jozefowicz et al., 2015). The viewpoint here 1 ar X iv :1 80 4. 11 18 8v 1 [ cs .L G ] 2 3 M ar 2 01 8 Published as a conference paper at ICLR 2018 explains why this is reasonable in most cases, when facing medium term dependencies, but fails when facing long to very long term dependencies. 
∙ We test the empirical benefits of the new initialization on both synthetic and real world data (Section 3). We observe substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate. 1 FROM TIME WARPING INVARIANCE TO GATING. When tackling sequential learning problems, being resilient to a change in time scale is crucial. Lack of resilience to time rescaling implies that we can make a problem arbitrarily difficult simply by changing the unit of measurement of time. Ordinary recurrent neural networks are highly nonresilient to time rescaling: a task can be rendered impossible for an ordinary recurrent neural network to learn, simply by inserting a fixed, small number of zeros or whitespaces between all elements of the input sequence. An explanation is that, with a given number of recurrent units, the class of functions representable by an ordinary recurrent network is not invariant to time rescaling. Ideally, one would like a recurrent model to be able to learn from time-warped input data x(c(t)) as easily as it learns from data x(t), at least if the time warping c(t) is not overly complex. The change of time c may represent not only time rescalings, but, for instance, accelerations or decelerations of the phenomena in the input data. We call a class of models invariant to time warping, if for any model in the class with input data x(t), and for any time warping c(t), there is another (or the same) model in the class that behaves on data x(c(t)) in the same way the original model behaves on x(t). (In practice, this will only be possible if the warping c is not too complex.) We will show that this is deeply linked to having gating mechanisms in the model. Invariance to time rescaling. Let us first discuss the simpler case of a linear time rescaling. Formally, this is a linear transformation of time, that is c : R+ → R+, t ↦→ αt (1) with α > 0. For instance, receiving a new input character every 10 time steps only, would correspond to α = 0.1. Studying time transformations is easier in the continuous-time setting. The discrete time equation of a basic recurrent network with hidden state ht, ht+1 = tanh(Wx xt + Wh ht + b) (2), can be seen as a time-discretized version of the continuous-time equation dh(t)/dt = tanh(Wx x(t) + Wh h(t) + b) − h(t) (3), namely, (2) is the Taylor expansion h(t + δt) ≈ h(t) + δt · dh(t)/dt with discretization step δt = 1. Now imagine that we want to describe time-rescaled data x(αt) with a model from the same class. Substituting t ← c(t) = αt, x(t) ← x(αt) and h(t) ← h(αt) and rewriting (3) in terms of the new variables, the time-rescaled model satisfies dh(t)/dt = α tanh(Wx x(t) + Wh h(t) + b) − α h(t) (4). However, when translated back to a discrete-time model, this no longer describes an ordinary RNN but a leaky RNN (Jaeger, 2002, §8.1). Indeed, taking the Taylor expansion of h(t + δt) with δt = 1 in (4) yields the recurrent model ht+1 = α tanh(Wx xt + Wh ht + b) + (1 − α) ht (5). We will use indices ht for discrete time and brackets h(t) for continuous time. More precisely, introduce a new time variable T and set the model and data with variable T to H(T) := h(c(T)) and X(T) := x(c(T)). Then compute dH(T)/dT. Then rename H to h, X to x and T to t to match the original notation.",
"title": ""
},
{
"docid": "1403e5ee76253ebf7e58300bf9f4dc8a",
"text": "PURPOSE\nTo evaluate the marginal fit of CAD/CAM copings milled from hybrid ceramic (Vita Enamic) blocks and lithium disilicate (IPS e.max CAD) blocks, and to evaluate the effect of crystallization firing on the marginal fit of lithium disilicate copings.\n\n\nMATERIALS AND METHODS\nA standardized metal die with a 1-mm-wide shoulder finish line was imaged using the CEREC AC Bluecam. The coping was designed using CEREC 3 software. The design was used to fabricate 15 lithium disilicate and 15 hybrid ceramic copings. Design and milling were accomplished by one operator. The copings were seated on the metal die using a pressure clamp with a uniform pressure of 5.5 lbs. A Macroview Microscope (14×) was used for direct viewing of the marginal gap. Four areas were imaged on each coping (buccal, distal, lingual, mesial). Image analysis software was used to measure the marginal gaps in μm at 15 randomly selected points on each of the four surfaces. A total of 60 measurements were made per specimen. For lithium disilicate copings the measurements for marginal gap were made before and after crystallization firing. Data were analyzed using paired t-test and Kruskal-Wallis test.\n\n\nRESULTS\nThe overall mean difference in marginal gap between the hybrid ceramic and crystallized lithium disilicate copings was statistically significant (p < 0.01). Greater mean marginal gaps were measured for crystallized lithium disilicate copings. The overall mean difference in marginal gap before and after firing (precrystallized and crystallized lithium disilicate copings) showed an average of 62 μm increase in marginal gap after firing. This difference was also significant (p < 0.01).\n\n\nCONCLUSIONS\nA significant difference exists in the marginal gap discrepancy when comparing hybrid ceramic and lithium disilicate CAD/CAM crowns. Also crystallization firing can result in a significant increase in the marginal gap of lithium disilicate CAD/CAM crowns.",
"title": ""
},
{
"docid": "d1fa7cf9a48f1ad5502f6aec2981f79a",
"text": "Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e., items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques. In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.",
"title": ""
},
{
"docid": "a6a9376f6205d5c2bc48964d482b6443",
"text": "Enrollment in online courses is rapidly increasing and attrition rates remain high. This paper presents a literature review addressing the role of interactivity in student satisfaction and persistence in online learning. Empirical literature was reviewed through the lens of Bandura's social cognitive theory, Anderson's interaction equivalency theorem, and Tinto's social integration theory. Findings suggest that interactivity is an important component of satisfaction and persistence for online learners, and that preferences for types of online interactivity vary according to type of learner. Student–instructor interaction was also noted to be a primary variable in online student satisfaction and persistence.",
"title": ""
},
{
"docid": "4cd966d7f6ebeb840d73aa1397ffacc5",
"text": "On an average 9 out of 10 startups fail(industry standard). Several reasons are responsible for the failure of a startup including bad management, lack of funds, etc. This work aims to create a predictive model for startups based on many key things involved at various stages in the life of a startup. It is highly desirable to increase the success rate of startups and not much work have been done to address the same. We propose a method to predict the outcome of a startups based on many key factors like seed funding amount, seed funding time, Series A funding, factors contributing to the success and failure of the company at every milestone. We can have created several models based on the data that we have carefully put together from various sources like Crunchbase, Tech Crunch, etc. Several data mining classification techniques were used on the preprocessed data along with various data mining optimizations and validations. We provide our analysis using techniques such as Random Forest, ADTrees, Bayesian Networks, and so on. We evaluate the correctness of our models based on factors like area under the ROC curve, precision and recall. We show that a startup can use our models to decide which factors they need to focus more on, in order to hit the success mark.",
"title": ""
},
{
"docid": "729581c92155092a82886e58284e8b92",
"text": "We investigate here the capabilities of a 400-element reconfigurable transmitarray antenna to synthesize monopulse radiation patterns for radar applications in X-band. The generation of the sum (Σ) and difference (A) patterns are demonstrated both theoretically and experimentally for broadside as well as tilted beams in different azimuthal planes. Two different feed configurations have been considered, namely, a single focal source and a four-element focal source configuration. The latter enables the simultaneous generation of a Σ- and two A-patterns in orthogonal planes, which is an important advantage for tracking applications with stringent requirements in speed and accuracy.",
"title": ""
},
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "adb9eaaf50a43d637bf59ce38d7e8f99",
"text": "In response to a stressor, physiological changes are set into motion to help an individual cope with the stressor. However, chronic activation of these stress responses, which include the hypothalamic–pituitary–adrenal axis and the sympathetic–adrenal–medullary axis, results in chronic production of glucocorticoid hormones and catecholamines. Glucocorticoid receptors expressed on a variety of immune cells bind cortisol and interfere with the function of NF-kB, which regulates the activity of cytokine-producing immune cells. Adrenergic receptors bind epinephrine and norepinephrine and activate the cAMP response element binding protein, inducing the transcription of genes encoding for a variety of cytokines. The changes in gene expression mediated by glucocorticoid hormones and catecholamines can dysregulate immune function. There is now good evidence (in animal and human studies) that the magnitude of stress-associated immune dysregulation is large enough to have health implications.",
"title": ""
},
{
"docid": "edc74b0742ef05296e6a36ff360a1bcc",
"text": "Many security primitives are based on hard mathematical problems. Using hard AI problems for security is emerging as an exciting new paradigm, but has been under-explored. In this paper, we present a new security primitive based on hard AI problems, namely, a novel family of graphical password systems built on top of Captcha technology, which we call Captcha as graphical passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. CaRP addresses a number of security problems altogether, such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Notably, a CaRP password can be found only probabilistically by automatic online guessing attacks even if the password is in the search set. CaRP also offers a novel approach to address the well-known image hotspot problem in popular graphical password systems, such as PassPoints, that often leads to weak password choices. CaRP is not a panacea, but it offers reasonable security and usability and appears to fit well with some practical applications for improving online security.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "107bb53e3ceda3ee29fc348febe87f11",
"text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.",
"title": ""
},
{
"docid": "a46460113926b688f144ddec74e03918",
"text": "The authors describe a new self-report instrument, the Inventory of Depression and Anxiety Symptoms (IDAS), which was designed to assess specific symptom dimensions of major depression and related anxiety disorders. They created the IDAS by conducting principal factor analyses in 3 large samples (college students, psychiatric patients, community adults); the authors also examined the robustness of its psychometric properties in 5 additional samples (high school students, college students, young adults, postpartum women, psychiatric patients) who were not involved in the scale development process. The IDAS contains 10 specific symptom scales: Suicidality, Lassitude, Insomnia, Appetite Loss, Appetite Gain, Ill Temper, Well-Being, Panic, Social Anxiety, and Traumatic Intrusions. It also includes 2 broader scales: General Depression (which contains items overlapping with several other IDAS scales) and Dysphoria (which does not). The scales (a) are internally consistent, (b) capture the target dimensions well, and (c) define a single underlying factor. They show strong short-term stability and display excellent convergent validity and good discriminant validity in relation to other self-report and interview-based measures of depression and anxiety.",
"title": ""
},
{
"docid": "9aee53ac010545e963f4e4697bf04ec2",
"text": "For financial institutions, the ability to predict or forecast business failures is crucial, as incorrect decisions can have direct financial consequences. Bankruptcy prediction and credit scoring are the two major research problems in the accounting and finance domain. In the literature, a number of models have been developed to predict whether borrowers are in danger of bankruptcy and whether they should be considered a good or bad credit risk. Since the 1990s, machine-learning techniques, such as neural networks and decision trees, have been studied extensively as tools for bankruptcy prediction and credit score modeling. This paper reviews 130 related journal papers from the period between 1995 and 2010, focusing on the development of state-of-the-art machine-learning techniques, including hybrid and ensemble classifiers. Related studies are compared in terms of classifier design, datasets, baselines, and other experimental factors. This paper presents the current achievements and limitations associated with the development of bankruptcy-prediction and credit-scoring models employing machine learning. We also provide suggestions for future research.",
"title": ""
},
{
"docid": "2567835d4af183ff0d57c698cd7c0a39",
"text": "OBJECTIVE\nThis descriptive study explores motivation of toddlers who are typically developing to persist with challenging occupations.\n\n\nMETHOD\nThe persistence of 33 children, 12 to 19 months of age (M = 15.7 months), in functional play and self-feeding with a utensil was examined through videotape analysis of on-task behaviors.\n\n\nRESULTS\nA modest correlation was demonstrated between the percentages of on-task time in the two conditions (r = .44, p < .01). Although chronological age was not associated with persistence, participants' age-equivalent fine motor scores were correlated with persistence with challenging toys (r = .39, p < .03) but not with self-feeding with a utensil. Having an older sibling was associated with longer periods of functional play, t(32) = 3.02, p < .005, but the amount the parent urged the child to eat with a utensil was not associated with persistence in self-feeding.\n\n\nCONCLUSION\nThe modest association between on-task time for functional play and self-feeding with a utensil reveals that factors other than urge to meet perceptual motor challenges lead to children's persistence. The results reinforce the importance of considering not only challenging activities, but also the experienced meaning that elicits optimal effort and, thus, learning.",
"title": ""
},
{
"docid": "3cb17167f1ce894d8a36d94ab7e9bd22",
"text": "FMCW (Frequency Modulation Continuous Wave) radar has been widely used in active safety systems recently. On account of range-velocity processing, ghost targets and missed targets exist in a multi-target situation. To address those issues, a two-step scheme as well as a novel FMCW waveform have been proposed in this paper. The proposed waveform contains four segments: fast up ramp, slow up ramp, flat frequency and slow down ramp in each period. The approximate range can be detected by using the fast up ramp in the first step, then the unambiguous velocity can be calculated during the flat frequency segment in the second step. The combination of the two independent measurements yields, finally, the real targets and eliminates the ghost targets. In addition, the computational complexity of our proposed scheme is 80% lower than ramps with different slopes method, and the effectiveness of our approach is demonstrated by simulation results.",
"title": ""
},
{
"docid": "4b8af6dfcaaea4246c10ab840ea03608",
"text": "Mobile cloud computing (MCC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under the hard constraint for application completion time remains a challenge issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.",
"title": ""
},
{
"docid": "775182872259257a0abff42d53b7bb04",
"text": "Matriptase is an epithelial-derived, cell surface serine protease. This protease activates hepatocyte growth factor (HGF) and urokinase plasminogen activator (uPA), two proteins thought to be involved in the growth and motility of cancer cells, particularly carcinomas, and in the vascularization of tumors. Thus, matriptase may play an important role in the progression of carcinomas, such as breast cancer. We examined the regulation of activation of matriptase in human breast cancer cells, in comparison to non-transformed mammary epithelial cells 184A1N4 and MCF-10A. Results clearly indicated that unlike non-transformed mammary epithelial cells, breast cancer cells do not respond to the known activators of matriptase, serum and sphingosine 1-phosphate (S1P). Similar levels of activated matriptase were detected in breast cancer cells, grown in the presence or absence of S1P. However, up to five-fold higher levels of activated matriptase were detected in the conditioned media from the cancer cells grown in the absence of serum and S1P, when compared to non-transformed mammary epithelial cells. S1P also induces formation of cortical actin structures in non-transformed cells, but not in breast cancer cells. These results show that in non-transformed cells, S1P induces a rearrangement of the actin cytoskeleton and stimulates proteolytic activity on cell surfaces. In contrast, S1P treatment of breast cancer cells does not activate matriptase, and instead these cells constitutively activate the protease. In addition, breast cancer cells respond differently to S1P in terms of the regulation of actin cytoskeletal structures. Matriptase and its cognate inhibitor, HGF activator inhibitor 1 (HAI-1) colocalize on the cell periphery of breast cancer cells and form stable complexes in the extracellular milieu, suggesting that the inhibitor serves to prevent undesired proteolysis in these cells. Finally, we demonstrate that treatment of T-47D cells with epidermal growth factor (EGF), which promotes cell ruffling, stimulates increased accumulation of activated matriptase at the sites of membrane ruffling, suggesting a possible functional role at these sites.",
"title": ""
}
] |
scidocsrr
|
766b754a5f72b9dc43c52f29163e27bb
|
Probabilistic Contact Estimation and Impact Detection for State Estimation of Quadruped Robots
|
[
{
"docid": "164879a016e455123c3b3c94d291ebf7",
"text": "A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported",
"title": ""
},
{
"docid": "1402ffc97f879b7b24aa079101abf791",
"text": "In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective ldquosafetyrdquo feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented.",
"title": ""
}
] |
[
{
"docid": "5306a9e31534841c944396841301ae4c",
"text": "Despite the significant gains made globally in reducing the burden of malaria, the disease remains a major public health challenge, especially in sub-Saharan Africa (SSA) including Ghana. There is a significant gap in financing malaria control globally. The private sector could become a significant source of financing malaria control. To get the private sector to appreciate the need to invest in malaria control, it is important to provide evidence of the economic burden of malaria on businesses. The objective of this study, therefore, was to estimate the economic burden on malaria on businesses in Ghana, so as to stimulate the sector’s investment in malaria control. Data covering 2012–2014 were collected from 62 businesses sampled from Greater Accra, Ashanti and Western Regions of Ghana, which have the highest concentration of businesses in the country. Data on the cost of businesses’ spending on treatment and prevention of malaria in staff and their dependants as well as staff absenteeism due to malaria and expenditure on other health-related activities were collected. Views of business leaders on the effect of malaria on their businesses were also compiled. The analysis was extrapolated to cover 5828 businesses across the country. The results show that businesses in Ghana lost about US$6.58 million to malaria in 2014, 90 % of which were direct costs. A total of 3913 workdays were lost due to malaria in firms in the study sample during the period 2012–2014. Businesses in the study sample spent an average of 0.5 % of the annual corporate returns on treatment of malaria in employees and their dependants, 0.3 % on malaria prevention, and 0.5 % on other health-related corporate social responsibilities. Again business leaders affirmed that malaria affects their businesses’ efficiency, employee attendance and productivity and expenses. Finally, about 93 % of business leaders expressed the need private sector investment in malaria control. The economic burden of malaria on businesses in Ghana cannot be underestimated. This, together with business leaders’ acknowledgement that it is important for private sector investment in malaria control, provides motivation for engagement of the private sector in financing malaria control activities.",
"title": ""
},
{
"docid": "ea5dfaeaa63f4a0586955a6d60bf7a8a",
"text": "Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time-a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available.",
"title": ""
},
{
"docid": "6fed39aba9c72f21c553a82d97a2cb23",
"text": "This paper presents a position sensorless closed loop control of a switched reluctance linear motor. The aim of the proposed control is to damp the position of the studied motor. Indeed, the position oscillations can harm some applications requiring high position precision. Moreover, they can induce the linear switched reluctance motor to an erratic working. The proposed control solution is based on back Electromotive Forces which give information about the oscillatory behaviour of the studied motor and avoid the use of a cumbersome and expensive position linear sensor. The determination of the designed control law parameters was based on the singular perturbation theory. The efficiency of the proposed control solution was proven by simulations and experimental tests.",
"title": ""
},
{
"docid": "476c102cd8942d54751cfb7f403099f2",
"text": "Cognitive radio (CR) represents the proper technological solution in case of radio resources scarcity and availability of shared channels. For the deployment of CR solutions, it is important to implement proper sensing procedures, which are aimed at continuously surveying the status of the channels. However, accurate views of the resources status can be achieved only through the cooperation of many sensing devices. For these reasons, in this paper, we propose the utilization of the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way, with respect to the rules set by their owners. The resulting social network enables faster and trustworthy information/service discovery exploiting the social network of “friend” objects. We first describe the general approach according to which members of the SIoT collaborate to exchange channel status information. Then, we discuss the main features, i.e., the possibility to implement a distributed approach for a low-complexity cooperation and the scalability feature in heterogeneous networks. Simulations have also been run to show the advantages in terms of increased capacity and decreased interference probability.",
"title": ""
},
{
"docid": "b591667db2fd53ac9332464b4babd877",
"text": "Health Insurance fraud is a major crime that imposes significant financial and personal costs on individuals, businesses, government and society as a whole. So there is a growing concern among the insurance industry about the increasing incidence of abuse and fraud in health insurance. Health Insurance frauds are driving up the overall costs of insurers, premiums for policyholders, providers and then intern countries finance system. It encompasses a wide range of illicit practices and illegal acts. This paper provides an approach to detect and predict potential frauds by applying big data, hadoop environment and analytic methods which can lead to rapid detection of claim anomalies. The solution is based on a high volume of historical data from various insurance company data and hospital data of a specific geographical area. Such sources are typically voluminous, diverse, and vary significantly over the time. Therefore, distributed and parallel computing tools collectively termed big data have to be developed. Paper demonstrate the effectiveness and efficiency of the open-source predictive modeling framework we used, describe the results from various predictive modeling techniques .The platform is able to detect erroneous or suspicious records in submitted health care data sets and gives an approach of how the hospital and other health care data is helpful for the detecting health care insurance fraud by implementing various data analytic module such as decision tree, clustering and naive Bayesian classification. Aim is to build a model that can identify the claim is a fraudulent or not by relating data from hospitals and insurance company to make health insurance more efficient and to ensure that the money is spent on legitimate causes. Critical objectives included the development of a fraud detection engine with an aim to help those in the health insurance business and minimize the loss of funds to fraud.",
"title": ""
},
{
"docid": "52b354c9b1cfe53598f159b025ec749a",
"text": "This paper describes a survey designed to determine the information seeking behavior of graduate students at the University of Macedonia (UoM). The survey is a continuation of a previous one undertaken in the Faculties of Philosophy and Engineering at the Aristotle University of Thessaloniki (AUTh). This paper primarily presents results from the UoM survey, but also makes comparisons with the findings from the earlier survey at AUTh. The 254 UoM students responding tend to use the simplest information search techniques with no critical variations between different disciplines. Their information seeking behavior seems to be influenced by their search experience, computer and web experience, perceived ability and frequency of use of esources, and not by specific personal characteristics or attendance at library instruction programs. Graduate students of both universities similar information seeking preferences, with the UoM students using more sophisticated techniques, such as Boolean search and truncation, more often than the AUTh students.",
"title": ""
},
{
"docid": "6f3931bf36c98642ee89284c6d6d7b7e",
"text": "Despite rapidly increasing numbers of diverse online shoppers the relationship of website design to trust, satisfaction, and loyalty has not previously been modeled across cultures. In the current investigation three components of website design (Information Design, Navigation Design, and Visual Design) are considered for their impact on trust and satisfaction. In turn, relationships of trust and satisfaction to online loyalty are evaluated. Utilizing data collected from 571 participants in Canada, Germany, and China various relationships in the research model are tested using PLS analysis for each country separately. In addition the overall model is tested for all countries combined as a control and verification of earlier research findings, although this time with a mixed country sample. All paths in the overall model are confirmed. Differences are determined for separate country samples concerning whether Navigation Design, Visual Design, and Information Design result in trust, satisfaction, and ultimately loyalty suggesting design characteristics should be a central consideration in website design across cultures.",
"title": ""
},
{
"docid": "b86f9981230708c2e84dc643d9ad16ad",
"text": "The article provides an analysis and reports experimental validation of the various performance metrics of the LoRa low-power wide-area network technology. The LoRa modulation is based on chirp spread spectrum, which enables use of low-quality oscillators in the end device, and to make the synchronization faster and more reliable. Moreover, LoRa technology provides over 150 dB link budget, providing good coverage. Therefore, LoRa seems to be quite a promising option for implementing communication in many diverse Internet of Things applications. In this article, we first briefly overview the specifics of the LoRa technology and analyze the scalability of the LoRa wide-area network. Then, we introduce setups of the performance measurements. The results show that using the transmit power of 14 dBm and the highest spreading factor of 12, more than 60% of the packets are received from the distance of 30 km on water. With the same configuration, we measured the performance of LoRa communication in mobile scenarios. The presented results reveal that at around 40 km/h, the communication performance gets worse, because duration of the LoRa-modulated symbol exceeds coherence time. However, it is expected that communication link is more reliable when lower spreading factors are used.",
"title": ""
},
{
"docid": "f1d67673483176bd6e596e4f078c17b4",
"text": "The current web suffers information overloading: it is increasingly difficult and time consuming to obtain information desired. Ontologies, the key concept behind the Semantic Web, will provide the means to overcome such problem by providing meaning to the available data. An ontology provides a shared and common understanding of a domain and information machine-processable semantics. To make the Semantic Web a reality and lift current Web to its full potential, powerful and expressive languages are required. Such web ontology languages must be able to describe and organize knowledge in the Web in a machine understandable way. However, organizing knowledge requires the facilities of a logical formalism which can deal with temporal, spatial, epistemic, and inferential aspects of knowledge. Implementations of Web ontology languages must provide these inference services, making them much more than just simple data storage and retrieval systems. This paper presents a state of the art for the most relevant Semantic Web Languages: XML, RDF(s), OIL, DAML+OIL, and OWL, together with a detailed comparison based on modeling primitives and language to language characteristics.",
"title": ""
},
{
"docid": "320bde052bb8d325c90df45cb21ac5de",
"text": "The power generated by solar photovoltaic (PV) module depends on surrounding irradiance, temperature and shading conditions. Under partial shading conditions (PSC) the power from the PV module can be dramatically reduced and maximum power point tracking (MPPT) control will be affected. This paper presents a hybrid simulation model of PV cell/module and system using Matlab®/Simulink® and Pspice®. The hybrid simulation model includes the solar PV cells and the converter power stage and can be expanded to add MPPT control and other functions. The model is able to simulate both the I-V characteristics curves and the P-V characteristics curves of PV modules under uniform shading conditions (USC) and PSC. The model is used to study different parameters variations effects on the PV array. The developed model is suitable to simulate several homogeneous or/and heterogeneous PV cells or PV panels connected in series and/or in parallel.",
"title": ""
},
{
"docid": "daef1d0005da14d3a5717bf400cd69e7",
"text": "Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outperform state-of-the-art methods for recognizing objects from novel viewpoints even when trained from just a single image per object. To further improve our performance on this task, we propose to take advantage of a supplementary dataset in which we observe a separate set of objects from multiple viewpoints. We introduce a new approach for training deep learning methods for instance recognition with limited training data, in which we use an auxiliary multi-view dataset to train our network to be robust to viewpoint changes. We find that this approach leads to a more robust classifier for recognizing objects from novel viewpoints, outperforming previous state-of-the-art approaches including keypoint-matching, template-based techniques, and sparse coding.",
"title": ""
},
{
"docid": "d98f60a2a0453954543da840076e388a",
"text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.",
"title": ""
},
{
"docid": "381c02fb1ce523ddbdfe3acdde20abf1",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "77cfc86c63ca0a7b3ed3b805ea16b9c9",
"text": "The research presented in this paper is about detecting collaborative networks inside the structure of a research social network. As case study we consider ResearchGate and SEE University academic staff. First we describe the methodology used to crawl and create an academic-academic network depending from their fields of interest. We then calculate and discuss four social network analysis centrality measures (closeness, betweenness, degree, and PageRank) for entities in this network. In addition to these metrics, we have also investigated grouping of individuals, based on automatic clustering depending from their reciprocal relationships.",
"title": ""
},
{
"docid": "bdbe25fba90d952f9bb0a46abdbee5c7",
"text": "A \"discount\" version of Q-methodology for HCI, called \"HCI-Q\", can be used in iterative design cycles to explore, from the point of view of users and other stakeholders, what makes technologies personally significant. Initially, designers critically reflect on their own assumptions about how a design may affect social and individual behavior. Then, designers use these assumptions as stimuli to elicit other people's points of view. This process of critical self-reflection and evaluation helps the designer to assess the fit between a design and its intended social context of use. To demonstrate the utility of HCI-Q for research and design, we use HCI-Q to explore stakeholders' responses to a prototype Alternative and Augmentative Communication (AAC) application called Vid2Speech. We show that our adaptation of Q-methodology is useful for revealing the structure of consensus and conflict among stakeholder perspectives, helping to situate design within the context of relevant value tensions and norms.",
"title": ""
},
{
"docid": "d814a42313d2d42d0cd20c5b484806ff",
"text": "This paper compares Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Wireless Routing Protocol (WRP) for MANETs to Distance Vector protocol to better understand the major characteristics of the three routing protocols, using a parallel discrete event-driven simulator, GloMoSim. MANET (mobile ad hoc network) is a multi-hop wireless network without a fixed infrastructure. There has not been much work that compares the performance of the MANET routing protocols, especially to Distance Vector protocol, which is a general routing protocol developed for legacy wired networks. The results of our experiments brought us nine key findings. Followings are some of our key findings: (1) AODV is most sensitive to changes in traffic load in the messaging overhead for routing. The number of control packets generated by AODV became 36 times larger when the traffic load was increased. For Distance Vector, WRP and DSR, their increase was approximately 1.3 times, 1.1 times and 7.6 times, respectively. (2) Two advantages common in the three MANET routing protocols compared to classical Distance Vector protocol were identified to be scalability for node mobility in end-to-end delay and scalability for node density in messaging overhead. (3) WRP resulted in the shortest delay and highest packet delivery rate, implying that WRP will be the best for real-time applications in the four protocols compared. WRP demonstrated the best traffic-scalability; control overhead will not increase much when traffic load increases.",
"title": ""
},
{
"docid": "47db0fdd482014068538a00f7dc826a9",
"text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.",
"title": ""
},
{
"docid": "db693b698c2cbc35a4b13c2e4a345f6b",
"text": "In this paper, a new design of a loaded cross dipole antennas (LCDA) with an omni-directional radiation pattern in the horizontal plane and broad-band characteristics is investigated. An efficient optimization procedure based on a genetic algorithm is employed to design the LCDA and to determine the parameters working over a 25:1 bandwidth. The simulation results are compared with measurements.",
"title": ""
},
{
"docid": "28b3d7fbcb20f5548d22dbf71b882a05",
"text": "In this paper, we propose a novel abnormal event detection method with spatio-temporal adversarial networks (STAN). We devise a spatio-temporal generator which synthesizes an inter- frame by considering spatio-temporal characteristics with bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines whether an input sequence is real-normal or not with 3D convolutional layers. These two networks are trained in an adversarial way to effectively encode spatio-temporal features of normal patterns. After the learning, the generator and the discriminator can be independently used as detectors, and deviations from the learned normal patterns are detected as abnormalities. Experimental results show that the proposed method achieved competitive performance compared to the state-of-the-art methods. Further, for the interpretation, we visualize the location of abnormal events detected by the proposed networks using a generator loss and discriminator gradients.",
"title": ""
},
{
"docid": "b266ab6e6a0fd75fb3d97b25970cab99",
"text": "a r t i c l e i n f o Keywords: Customer relationship management CRM Customer relationship performance Information technology Marketing capabilities Social media technology This study examines how social media technology usage and customer-centric management systems contribute to a firm-level capability of social customer relationship management (CRM). Drawing from the literature in marketing, information systems, and strategic management, the first contribution of this study is the conceptu-alization and measurement of social CRM capability. The second key contribution is the examination of how social CRM capability is influenced by both customer-centric management systems and social media technologies. These two resources are found to have an interactive effect on the formation of a firm-level capability that is shown to positively relate to customer relationship performance. The study analyzes data from 308 organizations using a structural equation modeling approach. Much like marketing managers in the late 1990s through early 2000s, who participated in the widespread deployment of customer relationship management (CRM) technologies, today's managers are charged with integrating nascent technologies – namely, social media applications – with existing systems and processes to develop new capabilities that foster stronger relationships with customers. This merger of existing CRM systems with social media technology has given way to a new concept of CRM that incorporates a more collaborative and network-focused approach to managing customer relationships. The term social CRM has recently emerged to describe this new way of developing and maintaining customer relationships (Greenberg, 2010). Marketing scholars have defined social CRM as the integration of customer-facing activities, including processes, systems, and technologies, with emergent social media applications to engage customers in collaborative conversations and enhance customer relationships (Greenberg, 2010; Trainor, 2012). Organizations are recognizing the potential of social CRM and have made considerable investments in social CRM technology over the past two years. According to Sarner et al. (2011), spending in social CRM technology increased by more than 40% in 2010 and is expected to exceed $1 billion by 2013. Despite the current hype surrounding social media applications, the efficacy of social CRM technology remains largely unknown and underexplored. Several questions remain unanswered, such as: 1) Can social CRM increase customer retention and loyalty? 2) How do social CRM technologies contribute to firm outcomes? 3) What role is played by CRM processes and technologies? As a result, companies are largely left to experiment with their social application implementations (Sarner et al., 2011), and they …",
"title": ""
}
] |
scidocsrr
|
e5063f8f277d9ad09c8c8650687db3cb
|
Detecting Credit Card Fraud by Decision Trees and Support Vector Machines
|
[
{
"docid": "51eb8e36ffbf5854b12859602f7554ef",
"text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.",
"title": ""
},
{
"docid": "342e3fd05878ebff3bc2686fb05009f5",
"text": "Due to a rapid advancement in the electronic commerce technology, use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment, credit card frauds are becoming increasingly rampant in recent years. In this paper, we model the sequence of operations in credit card transaction processing using a confidence-based neural network. Receiver operating characteristic (ROC) analysis technology is also introduced to ensure the accuracy and effectiveness of fraud detection. A neural network is initially trained with synthetic data. If an incoming credit card transaction is not accepted by the trained neural network model (NNM) with sufficiently low confidence, it is considered to be fraudulent. This paper shows how confidence value, neural network algorithm and ROC can be combined successfully to perform credit card fraud detection.",
"title": ""
}
] |
[
{
"docid": "ca117e9bfd90df7ac652628b342a4b62",
"text": "In this article, we introduce an explicit count-based strategy to build word space models with syntactic contexts (dependencies). A filtering method is defined to reduce explicit word-context vectors. This traditional strategy is compared with a neural embedding (predictive) model also based on syntactic dependencies. The comparison was performed using the same parsed corpus for both models. Besides, the dependency-based methods are also compared with bag-of-words strategies, both count-based and predictive ones. The results show that our traditional countbased model with syntactic dependencies outperforms other strategies, including dependency-based embeddings, but just for the tasks focused on discovering similarity between words with the same function (i.e. near-synonyms).",
"title": ""
},
{
"docid": "4883a841f80cc60a9b7f505af2d1984d",
"text": "Hiring good people is tough, but keeping them can be even tougher. The professionals streaming out of today's MBA programs are so well educated and achievement oriented that they could do well in virtually any job. But will they stay? According to noted career experts Timothy Butler and James Waldroop, only if their jobs fit their deeply embedded life interests--that is, their long-held, emotionally driven passions. Butler and Waldroop identify the eight different life interests of people drawn to business careers and introduce the concept of job sculpting, the art of matching people to jobs that resonate with the activities that make them truly happy. Managers don't need special training to job sculpt, but they do need to listen more carefully when employees describe what they like and dislike about their jobs. Once managers and employees have discussed deeply embedded life interests--ideally, during employee performance reviews--they can work together to customize future work assignments. In some cases, that may mean simply adding another assignment to existing responsibilities. In other cases, it may require moving that employee to a new position altogether. Skills can be stretched in many directions, but if they are not going in the right direction--one that is congruent with deeply embedded life interests--employees are at risk of becoming dissatisfied and uncommitted. And in an economy where a company's most important asset is the knowledge, energy, and loyalty of its people, that's a large risk to take.",
"title": ""
},
{
"docid": "b9da5b905cfe701303b627f359c30624",
"text": "Parametric embedding methods such as parametric t-distributed Stochastic Neighbor Embedding (pt-SNE) enables out-of-sample data visualization without further computationally expensive optimization or approximation. However, pt-SNE favors small mini-batches to train a deep neural network but large minibatches to approximate its cost function involving all pairwise data point comparisons, and thus has difficulty in finding a balance. To resolve the conflicts, we present parametric t-distributed stochastic exemplar-centered embedding. Our strategy learns embedding parameters by comparing training data only with precomputed exemplars to indirectly preserve local neighborhoods, resulting in a cost function with significantly reduced computational and memory complexity. Moreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune but produces comparable performance in contrast to a deep feedforward neural network employed by pt-SNE. We empirically demonstrate, using several benchmark datasets, that our proposed method significantly outperforms pt-SNE in terms of robustness, visual effects, and quantitative evaluations.",
"title": ""
},
{
"docid": "0f1fd9d1daeea4f175f57c5b32c471fc",
"text": "An overview of cluster analysis techniques from a data mining point of view is given. This is done by a strict separation of the questions of various similarity and distance measures and related optimization criteria for clusterings from the methods to create and modify clusterings themselves. In addition to this general setting and overview, the second focus is used on discussions of the essential ingredients of the demographic cluster algorithm of IBM's Intelligent Miner, based Condorcet's criterion.",
"title": ""
},
{
"docid": "a2673b70bf6c7cf50f2f4c4db2845e19",
"text": "This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task. The goal of this workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems beyond the distributions of their training data. We describe the motivation, setup, and participation of the shared task, provide discussion of some highlighted results, and discuss lessons learned.",
"title": ""
},
{
"docid": "5c2c7faab9ba34058057cea35bcc6b92",
"text": "Today, there are a large number of online discussion fora on the internet which are meant for users to express, discuss and exchange their views and opinions on various topics. For example, news portals, blogs, social media channels such as youtube. typically allow users to express their views through comments. In such fora, it has been often observed that user conversations sometimes quickly derail and become inappropriate such as hurling abuses, passing rude and discourteous comments on individuals or certain groups/communities. Similarly, some virtual agents or bots have also been found to respond back to users with inappropriate messages. As a result, inappropriate messages or comments are turning into an online menace slowly degrading the effectiveness of user experiences. Hence, automatic detection and filtering of such inappropriate language has become an important problem for improving the quality of conversations with users as well as virtual agents. In this paper, we propose a novel deep learning-based technique for automatically identifying such inappropriate language. We especially focus on solving this problem in two application scenarios—(a) Query completion suggestions in search engines and (b) Users conversations in messengers. Detecting inappropriate language is challenging due to various natural language phenomenon such as spelling mistakes and variations, polysemy, contextual ambiguity and semantic variations. For identifying inappropriate query suggestions, we propose a novel deep learning architecture called “Convolutional Bi-Directional LSTM (C-BiLSTM)\" which combines the strengths of both Convolution Neural Networks (CNN) and Bi-directional LSTMs (BLSTM). For filtering inappropriate conversations, we use LSTM and Bi-directional LSTM (BLSTM) sequential models. The proposed models do not rely on hand-crafted features, are trained end-end as a single model, and effectively capture both local features as well as their global semantics. Evaluating C-BiLSTM, LSTM and BLSTM models on real-world search queries and conversations reveals that they significantly outperform both pattern-based and other hand-crafted feature-based baselines.",
"title": ""
},
{
"docid": "6c81b1fe36a591b3b86a5e912a8792c1",
"text": "Mobile phones, sensors, patients, hospitals, researchers, providers and organizations are nowadays, generating huge amounts of healthcare data. The real challenge in healthcare systems is how to find, collect, analyze and manage information to make people's lives healthier and easier, by contributing not only to understand new diseases and therapies but also to predict outcomes at earlier stages and make real-time decisions. In this paper, we explain the potential benefits of big data to healthcare and explore how it improves treatment and empowers patients, providers and researchers. We also describe the ability of reality mining in collecting large amounts of data to understand people's habits, detect and predict outcomes, and illustrate the benefits of big data analytics through five effective new pathways that could be adopted to promote patients' health, enhance medicine, reduce cost and improve healthcare value and quality. We cover some big data solutions in healthcare and we shed light on implementations, such as Electronic Healthcare Record (HER) and Electronic Healthcare Predictive Analytics (e-HPA) in US hospitals. Furthermore, we complete the picture by highlighting some challenges that big data analytics faces in healthcare.",
"title": ""
},
{
"docid": "5e858796f025a9e2b91109835d827c68",
"text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.",
"title": ""
},
{
"docid": "ac2c1d325a242cc0037474d2a51c2b70",
"text": "In female mammals, one of the two X chromosomes is silenced for dosage compensation between the sexes. X-chromosome inactivation is initiated in early embryogenesis by the Xist RNA that localizes to the inactive X chromosome. During development, the inactive X chromosome is further modified, a specialized form of facultative heterochromatin is formed and gene repression becomes stable and independent of Xist in somatic cells. The recent identification of several factors involved in this process has provided insights into the mechanism of Xist localization and gene silencing. The emerging picture is complex and suggests that chromosome-wide silencing can be partitioned into several steps, the molecular components of which are starting to be defined.",
"title": ""
},
{
"docid": "7670b1eea992a1e83d3ebc1464563d60",
"text": "The present work was conducted to demonstrate a method that could be used to assess the hypothesis that children with specific language impairment (SLI) often respond more slowly than unimpaired children on a range of tasks. The data consisted of 22 pairs of mean response times (RTs) obtained from previously published studies; each pair consisted of a mean RT for a group of children with SLI for an experimental condition and the corresponding mean RT for a group of children without SLI. If children with SLI always respond more slowly than unimpaired children and by an amount that does not vary across tasks, then RTs for children with SLI should increase linearly as a function of RTs for age-matched control children without SLI. This result was obtained and is consistent with the view that differences in processing speed between children with and without SLI reflect some general (i.e., non-task specific) component of cognitive processing. Future applications of the method are suggested.",
"title": ""
},
{
"docid": "fd9a5d3158a0079431ee4d740e5e24ba",
"text": "Justifying art activities in early childhood education seems like a trivial task. Everyone knows that young children love to draw, dip their fingers in paint or squeeze playdough to create images and forms that only those with hardened hearts would find difficult to appreciate. Children seem happier when they have access to art materials and supplies than when they are denied such opportunities, and usually they do not need much invitation to spontaneously take advantage of these offerings. The outcomes of children’s “art play” tend to fascinate adult audiences – and adults: from artistically naïve parents, through psychologists and therapists, to researchers specifically studying artistic development – have long attempted to understand their significance and meaning. Early childhood classrooms are cheerful and require minimal budgets to decorate with the abundance of children’s art. Early childhood parents and teachers also trust, or at least hope, that there are some significant formative benefits to children from their engagement in art activities, such as development of creativity.",
"title": ""
},
{
"docid": "01a70ee73571e848575ed992c1a3a578",
"text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.",
"title": ""
},
{
"docid": "4d0f926c0b097f7b253db787e0c76b5c",
"text": "The processing and interpretation of pain signals is a complex process that entails excitation of peripheral nerves, local interactions within the spinal dorsal horn, and the activation of ascending and descending circuits that comprise a loop from the spinal cord to supraspinal structures and finally exciting nociceptive inputs at the spinal level. Although the \"circuits\" described here appear to be part of normal pain processing, the system demonstrates a remarkable ability to undergo neuroplastic transformations when nociceptive inputs are extended over time, and such adaptations function as a pronociceptive positive feedback loop. Manipulations directed to disrupt any of the nodes of this pain facilitatory loop may effectively disrupt the maintenance of the sensitized pain state and diminish or abolish neuropathic pain. Understanding the ascending and descending pain facilitatory circuits may provide for the design of rational therapies that do not interfere with normal sensory processing.",
"title": ""
},
{
"docid": "8e6ba93f41c4e59fe937b1d48dfb0f74",
"text": "This paper aims at studying the impact of the colors of e-commerce websites on consumer memorization and buying intention. Based on a literature review we wish to introduce the theoretical and methodological bases addressing this issue. A conceptual model is proposed, showing the effects of the color of the e-commerce website and of its components Hue, Brightness and Saturation, on the behavioral responses of the consumer memorization and buying intention. These responses are conveyed by mood. Data collection was carried out during a laboratory experiment in order to control for the measurement of the colored appearance of e-commerce websites. Participants visited one of the 8 versions of a website designed for the research, selling music CDs. Data analysis using ANOVA, regressions and general linear models (GLM), show a significant effect of color on memorization, conveyed by mood. The interaction of hue and brightness, using chromatic colors for the dominant (background) and dynamic (foreground) supports memorization and buying intention, when contrast is based on low brightness. A negative mood infers better memorization but a decreasing buying intention. The managerial, methodological and theoretical implications, as well as the future ways of research were put in prospect.",
"title": ""
},
{
"docid": "ba8467f6b5a28a2b076f75ac353334a0",
"text": "Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "d6de2969e89e211f6faf8a47854ee43e",
"text": "Digital image forensics has attracted a lot of attention recently for its role in identifying the origin of digital image. Although different forensic approaches have been proposed, one of the most popular approaches is to rely on the imaging sensor pattern noise, where each sensor pattern noise uniquely corresponds to an imaging device and serves as the intrinsic fingerprint. The correlation-based detection is heavily dependent upon the accuracy of the extracted pattern noise. In this work, we discuss the way to extract the pattern noise, in particular, explore the way to make better use of the pattern noise. Unlike current methods that directly compare the whole pattern noise signal with the reference one, we propose to only compare the large components of these two signals. Our detector can better identify the images taken by different cameras. In the meantime, it needs less computational complexity.",
"title": ""
},
{
"docid": "f77a235f49cc8b0c037eb0c528b2c9dc",
"text": "This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor’s interests gathered from his/her physical path in the museum and length of stops. The wearable is made by a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called private-eye) attached to conventional headphones. Using custom built infrared location sensors distributed in the museum space, and statistical mathematical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device will enrich and personalize the museum visit as a visual and auditory storyteller that is able to adapt its story to the audience’s interests and guide the public through the path of the exhibit.",
"title": ""
},
{
"docid": "0e1e5ab11e04789e00c99439384edc82",
"text": "Linking multiple accounts owned by the same user across different online social networks (OSNs) is an important issue in social networks, known as identity reconciliation. Graph matching is one of popular techniques to solve this problem by identifying a map that matches a set of vertices across different OSNs. Among them, percolation-based graph matching (PGM) has been explored to identify entities belonging to a same user across two different networks based on a set of initial pre-matched seed nodes and graph structural information. However, existing PGM algorithms have been applied in only undirected networks while many OSNs are represented by directional relationships (e.g., followers or followees in Twitter or Facebook). For PGM to be applicable in real world OSNs represented by directed networks with a small set of overlapping vertices, we propose a percolation-based directed graph matching algorithm, namely PDGM, by considering the following two key features: (1) similarity of two nodes based on directional relationships (i.e., outgoing edges vs. incoming edges); and (2) celebrity penalty such as penalty given for nodes with a high in-degree. Through the extensive simulation experiments, our results show that the proposed PDGM outperforms the baseline PGM counterpart that does not consider either directional relationships or celebrity penalty.",
"title": ""
},
{
"docid": "d094b75f0a1b7f40b39f02bb74397d71",
"text": "We propose a theory that relates difficulty of learning in deep architectures to culture and language. It is articulated around the following hypotheses: (1) learning in an individual human brain is hampered by the presence of effective local minima; (2) this optimization difficulty is particularly important when it comes to learning higher-level abstractions, i.e., concepts that cover a vast and highly-nonlinear span of sensory configurations; (3) such high-level abstractions are best represented in brains by the composition of many levels of representation, i.e., by deep architectures; (4) a human brain can learn such high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and (5), language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world. These hypotheses put together imply that human culture and the evolution of ideas have been crucial to counter an optimization difficulty: this optimization difficulty would otherwise make it very difficult for human brains to capture high-level knowledge of the world. The theory is grounded in experimental observations of the difficulties of training deep artificial neural networks. Plausible consequences of this theory for the efficiency of cultural evolution are sketched.",
"title": ""
}
] |
scidocsrr
|
3a88e89f334f38c50cdab99d2e23217f
|
Deep context-aware descreening and rescreening of halftone images
|
[
{
"docid": "66a1a943580cdd300f9579e80f258a2e",
"text": "The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.",
"title": ""
}
] |
[
{
"docid": "8d9fbeda9f6a77e927ac14b0d426d1d3",
"text": "This paper describes a new detector for finding perspective rectangle structural features that runs in real-time. Given the vanishing points within an image, the algorithm recovers the edge points that are aligned along the vanishing lines. We then efficiently recover the intersections of pairs of lines corresponding to different vanishing points. The detector has been designed for robot visual mapping, and we present the application of this detector to real-time stereo matching and reconstruction over a corridor sequence for this goal.",
"title": ""
},
{
"docid": "281c64b492a1aff7707dbbb5128799c8",
"text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.",
"title": ""
},
{
"docid": "91c024a832bfc07bc00b7086bcf77add",
"text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.",
"title": ""
},
{
"docid": "c95894477d7279deb7ddbb365030c34e",
"text": "Among mammals living in social groups, individuals form communication networks where they signal their identity and social status, facilitating social interaction. In spite of its importance for understanding of mammalian societies, the coding of individual-related information in the vocal signals of non-primate mammals has been relatively neglected. The present study focuses on the spotted hyena Crocuta crocuta, a social carnivore known for its complex female-dominated society. We investigate if and how the well-known hyena's laugh, also known as the giggle call, encodes information about the emitter. By analyzing acoustic structure in both temporal and frequency domains, we show that the hyena's laugh can encode information about age, individual identity and dominant/subordinate status, providing cues to receivers that could enable assessment of the social position of an emitting individual. The range of messages encoded in the hyena's laugh is likely to play a role during social interactions. This call, together with other vocalizations and other sensory channels, should ensure an array of communication signals that support the complex social system of the spotted hyena. Experimental studies are now needed to decipher precisely the communication network of this species.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "531e30bf9610b82f6fc650652e6fc836",
"text": "A versatile microreactor platform featuring a novel chemical-resistant microvalve array has been developed using combined silicon/polymer micromachining and a special polymer membrane transfer process. The basic valve unit in the array has a typical ‘transistor’ structure and a PDMS/parylene double-layer valve membrane. A robust multiplexing algorithm is also proposed for individual addressing of a large array using a minimal number of signal inputs. The in-channel microvalve is leakproof upon pneumatic actuation. In open status it introduces small impedance to the fluidic flow, and allows a significantly larger dynamic range of flow rates (∼ml min−1) compared with most of the microvalves reported. Equivalent electronic circuits were established by modeling the microvalves as PMOS transistors and the fluidic channels as simple resistors to provide theoretical prediction of the device fluidic behavior. The presented microvalve/reactor array showed excellent chemical compatibility in the tests with several typical aggressive chemicals including those seriously degrading PDMS-based microfluidic devices. Combined with the multiplexing strategy, this versatile array platform can find a variety of lab-on-a-chip applications such as addressable multiplex biochemical synthesis/assays, and is particularly suitable for those requiring tough chemicals, large flow rates and/or high-throughput parallel processing. As an example, the device performance was examined through the addressed synthesis of 30-mer DNA oligonucleotides followed by sequence validation using on-chip hybridization. The results showed leakage-free valve array addressing and proper synthesis in target reactors, as well as uniform flow distribution and excellent regional reaction selectivity. (Some figures in this article are in colour only in the electronic version) 0960-1317/06/081433+11$30.00 © 2006 IOP Publishing Ltd Printed in the UK 1433",
"title": ""
},
{
"docid": "b35efe68d99331d481e439ae8fbb4a64",
"text": "Semantic matching (SM) for textual information can be informally defined as the task of effectively modeling text matching using representations more complex than those based on simple and independent set of surface forms of words or stems (typically indicated as bag-of-words). In this perspective, matching named entities (NEs) implies that the associated model can both overcomes mismatch between different representations of the same entities, e.g., George H. W. Bush vs. George Bush, and carry out entity disambiguation to avoid incorrect matches between different but similar entities, e.g., the entity above with his son George W. Bush. This means that both the context and structure of NEs must be taken into account in the IR model. SM becomes even more complex when attempting to match the shared semantics between two larger pieces of text, e.g., phrases or clauses, as there is currently no theory indicating how words should be semantically composed for deriving the meaning of text. The complexity above has traditionally led to define IR models based on bag-of-word representations in the vector space model (VSM), where (i) the necessary structure is minimally taken into account by considering n-grams or phrases; and (ii) the matching coverage is increased by projecting text in latent semantic spaces or alternatively by applying query expansion. Such methods introduce a considerable amount of noise, which negatively balances the benefit of achieving better coverage in most cases, thus producing no IR system improvement. In the last decade, a new class of semantic matching approaches based on the so-called Kernel Methods (KMs) for structured data (see e.g., [4]) have been proposed. KMs also adopt scalar products (which, in this context, take the names of kernel functions) in VSM. However, KMs introduce two new important aspects: • the scalar product is implicitly computed using smart techniques, which enable the use of huge feature spaces, e.g., all possible skip n-grams; and • KMs are typically applied within supervised algorithms, e.g., SVMs, which, exploiting training data, can filter out irrelevant features and noise. In this talk, we will briefly introduce and summarize, the latest results on kernel methods for semantic matching by focusing on structural kernels. These can be applied to match syntactic and/or semantic representations of text shaped as trees. Several variants are available: the Syntactic Tree Kernels (STK), [2], the String Kernels (SK) [5] and the Partial Tree Kernels (PTK) [4]. Most interestingly, we will present tree kernels exploiting SM between words contained in a text structure, i.e., the Syntactic Semantic Tree Kernels (SSTK) [1] and the Smoothed Partial Tree Kernels (SPTK) [3]. These extend STK and PTK by allowing for soft matching (i.e., via similarity computation) between nodes associated with different but related labels, e.g., synonyms. The node similarity can be derived from manually annotated resources, e.g., WordNet or Wikipedia, as well as using corpus-based clustering approaches, e.g., latent semantic analysis (LSA). An example of the use of such kernels for question classification in the question answering domain will illustrate the potentials of their structural similarity approach.",
"title": ""
},
{
"docid": "8a299d5d999c2399b683c4fbaf5f80a7",
"text": "Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding on microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. COMAN pipeline includes quality control of raw reads, removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/ COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amount of reads to data tables and high-standard figures. It is expected to facilitate the researchers with less expertise in bioinformatics in answering microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.",
"title": ""
},
{
"docid": "8bd2f3b6cdcfe6c36fe65f970642cd3e",
"text": "Partial Discharges (PDs) are one of the most important classes of ageing processes that occur within electrical insulation. The measurement of PDs is useful in the diagnosis of electrical equipment because PDs activity is related to different ageing mechanisms. Classical Phase-Resolved Partial Discharge (PRPD) patterns are able to identify PD sources when they are related to a clear degradation process and when the noise level is low compared to the amplitudes of the PDs. However, real insulation systems usually exhibit several PD sources and the noise level is high, especially if measurements are performed on-line. High-frequency (HF) sensors and advanced signal processing techniques have been successfully applied to identify these phenomena in real insulation systems. In this paper, spectral power analyses of PD pulses and the spectral power ratios at different frequencies were calculated to classify PD sources and noise by means of a graphical representation in a plane. This technique is a flexible tool for noise identification and will be useful for pulse characterization.",
"title": ""
},
{
"docid": "3d3110b19142e9a01bf4252742ce9586",
"text": "Detecting unsolicited content and the spammers who create it is a long-standing challenge that affects all of us on a daily basis. The recent growth of richly-structured social networks has provided new challenges and opportunities in the spam detection landscape. Motivated by the Tagged.com social network, we develop methods to identify spammers in evolving multi-relational social networks. We model a social network as a time-stamped multi-relational graph where vertices represent users, and edges represent different activities between them. To identify spammer accounts, our approach makes use of structural features, sequence modelling, and collective reasoning. We leverage relational sequence information using k-gram features and probabilistic modelling with a mixture of Markov models. Furthermore, in order to perform collective reasoning and improve the predictive power of a noisy abuse reporting system, we develop a statistical relational model using hinge-loss Markov random fields (HL-MRFs), a class of probabilistic graphical models which are highly scalable. We use Graphlab Create and Probabilistic Soft Logic (PSL) to prototype and experimentally evaluate our solutions on internet-scale data from Tagged.com. Our experiments demonstrate the effectiveness of our approach, and show that models which incorporate the multi-relational nature of the social network significantly gain predictive performance over those that do not.",
"title": ""
},
{
"docid": "3af0d725852cd082e2b83bc885b0b68b",
"text": "Plastic solid waste (PSW) presents challenges and opportunities to societies regardless of their sustainability awareness and technological advances. In this paper, recent progress in the recycling and recovery of PSW is reviewed. A special emphasis is paid on waste generated from polyolefinic sources, which makes up a great percentage of our daily single-life cycle plastic products. The four routes of PSW treatment are detailed and discussed covering primary (re-extrusion), secondary (mechanical), tertiary (chemical) and quaternary (energy recovery) schemes and technologies. Primary recycling, which involves the re-introduction of clean scrap of single polymer to the extrusion cycle in order to produce products of the similar material, is commonly applied in the processing line itself but rarely applied among recyclers, as recycling materials rarely possess the required quality. The various waste products, consisting of either end-of-life or production (scrap) waste, are the feedstock of secondary techniques, thereby generally reduced in size to a more desirable shape and form, such as pellets, flakes or powders, depending on the source, shape and usability. Tertiary treatment schemes have contributed greatly to the recycling status of PSW in recent years. Advanced thermo-chemical treatment methods cover a wide range of technologies and produce either fuels or petrochemical feedstock. Nowadays, non-catalytic thermal cracking (thermolysis) is receiving renewed attention, due to the fact of added value on a crude oil barrel and its very valuable yielded products. But a fact remains that advanced thermo-chemical recycling of PSW (namely polyolefins) still lacks the proper design and kinetic background to target certain desired products and/or chemicals. Energy recovery was found to be an attainable solution to PSW in general and municipal solid waste (MSW) in particular. The amount of energy produced in kilns and reactors applied in this route is sufficiently investigated up to the point of operation, but not in terms of integration with either petrochemical or converting plants. Although primary and secondary recycling schemes are well established and widely applied, it is concluded that many of the PSW tertiary and quaternary treatment schemes appear to be robust and worthy of additional investigation.",
"title": ""
},
{
"docid": "70f672268ae0b3e0e344a4f515057e6b",
"text": "Murder-suicide, homicide-suicide, and dyadic death all refer to an incident where a homicide is committed followed by the perpetrator's suicide almost immediately or soon after the homicide. Homicide-suicides are relatively uncommon and vary from region to region. In the selected literature that we reviewed, shooting was the common method of killing and suicide, and only 3 cases of homicidal hanging involving child victims were identified. We present a case of dyadic death where the method of killing and suicide was hanging, and the victim was a young woman.",
"title": ""
},
{
"docid": "85bfa5d711d845175759a8e3973d37cb",
"text": "Human motion and behaviour in crowded spaces is influenced by several factors, such as the dynamics of other moving agents in the scene, as well as the static elements that might be perceived as points of attraction or obstacles. In this work, we present a new model for human trajectory prediction which is able to take advantage of both human-human and human-space interactions. The future trajectory of humans, are generated by observing their past positions and interactions with the surroundings. To this end, we propose a “context-aware” recurrent neural network LSTM model, which can learn and predict human motion in crowded spaces such as a sidewalk, a museum or a shopping mall. We evaluate our model on a public pedestrian datasets, and we contribute a new challenging dataset that collects videos of humans that navigate in a (real) crowded space such as a big museum. Results show that our approach can predict human trajectories better when compared to previous state-of-the-art forecasting models.",
"title": ""
},
{
"docid": "77b4be1fb0b87eb1ee0399c073a7b78f",
"text": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
"title": ""
},
{
"docid": "ea041a1df42906b0d5a3644ae8ba933b",
"text": "In recent years, program verifiers and interactive theorem provers have become more powerful and more suitable for verifying large programs or proofs. This has demonstrated the need for improving the user experience of these tools to increase productivity and to make them more accessible to nonexperts. This paper presents an integrated development environment for Dafny—a programming language, verifier, and proof assistant—that addresses issues present in most state-of-the-art verifiers: low responsiveness and lack of support for understanding non-obvious verification failures. The paper demonstrates several new features that move the state-of-the-art closer towards a verification environment that can provide verification feedback as the user types and can present more helpful information about the program or failed verifications in a demand-driven and unobtrusive way.",
"title": ""
},
{
"docid": "93a3895a03edcb50af74db901cb16b90",
"text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.",
"title": ""
},
{
"docid": "617189999dd72a73f5097f87d9874ae5",
"text": "In this study, we present a novel ranking model based on learning the nearest neighbor relationships embedded in the index space. Given a query point, a conventional nearest neighbor search approach calculates the distances to the cluster centroids, before ranking the clusters from near to far based on the distances. The data indexed in the top-ranked clusters are retrieved and treated as the nearest neighbor candidates for the query. However, the loss of quantization between the data and cluster centroids will inevitably harm the search accuracy. To address this problem, the proposed model ranks clusters based on their nearest neighbor probabilities rather than the query-centroid distances to the query. The nearest neighbor probabilities are estimated by employing neural networks to characterize the neighborhood relationships as a nonlinear function, i.e., the density distribution of nearest neighbors with respect to the query. The proposed probability-based ranking model can replace the conventional distance-based ranking model as a coarse filter for candidate clusters, and the nearest neighbor probability can be used to determine the data quantity to be retrieved from the candidate cluster. Our experimental results demonstrated that implementation of the proposed ranking model for two state-of-the-art nearest neighbor quantization and search methods could boost the search performance effectively in billion-scale datasets.",
"title": ""
},
{
"docid": "20d186b7db540be57492daa805b51b31",
"text": "Printability, the capability of a 3D printer to closely reproduce a 3D model, is a complex decision involving several geometrical attributes like local thickness, shape of the thin regions and their surroundings, and topology with respect to thin regions. We present a method for assessment of 3D shape printability which efficiently and effectively computes such attributes. Our method uses a simple and efficient voxel-based representation and associated computations. Using tools from multi-scale morphology and geodesic analysis, we propose several new metrics for various printability problems. We illustrate our method with results taken from a real-life application.",
"title": ""
},
{
"docid": "0688abcb05069aa8a0956a0bd1d9bf54",
"text": "Sex differences in mortality rates stem from genetic, physiological, behavioral, and social causes that are best understood when integrated in an evolutionary life history framework. This paper investigates the Male-to-Female Mortality Ratio (M:F MR) from external and internal causes and across contexts to illustrate how sex differences shaped by sexual selection interact with the environment to yield a pattern with some consistency, but also with expected variations due to socioeconomic and other factors.",
"title": ""
},
{
"docid": "ac156d7b3069ff62264bd704b7b8dfc9",
"text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO",
"title": ""
}
] |
scidocsrr
|
e1fa8c6e51abe619a6680f8fffd49b4a
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks
|
[
{
"docid": "22ad4568fbf424592c24783fb3037f62",
"text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.",
"title": ""
}
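    Note: the author-topic passage above describes a two-stage generative process (author -> topic -> word) whose parameters are learned by MCMC. As a minimal illustration only, the sketch below simulates that generative process with toy dimensions and assumed symmetric Dirichlet priors; none of the sizes, prior values, or function names come from the paper, and the inference step (e.g., collapsed Gibbs sampling) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n_authors, n_topics, n_words = 4, 3, 20   # toy sizes (assumed)
alpha, beta = 0.5, 0.1                    # symmetric Dirichlet hyperparameters (assumed)

# Each author is a distribution over topics; each topic is a distribution over words.
author_topic = rng.dirichlet(alpha * np.ones(n_topics), size=n_authors)
topic_word = rng.dirichlet(beta * np.ones(n_words), size=n_topics)

def generate_document(author_ids, doc_length):
    """Generate one document: for every token, pick a co-author uniformly,
    then a topic from that author's distribution, then a word from that topic."""
    words = []
    for _ in range(doc_length):
        a = rng.choice(author_ids)                    # uniform over the paper's authors
        z = rng.choice(n_topics, p=author_topic[a])   # author -> topic
        w = rng.choice(n_words, p=topic_word[z])      # topic -> word
        words.append(w)
    return words

doc = generate_document(author_ids=[0, 2], doc_length=15)
print(doc)
```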
] |
[
{
"docid": "2c63d6e44d9582355d9ac4b471fe28c3",
"text": "Introduction Immediate implant placement is a well-recognized and successful treatment option following tooth removal.1 Although the success rates for both immediate and delayed implant techniques are comparable, the literature reports that one can expect there to be recession of the buccal / facial gingiva of at least 1 mm following immediate implant placement, with the recession to possibly worsen in thin gingival biotypes.2 Low aesthetic value areas may be of less concern, however this recession and ridge collapse can pose an aesthetic disaster in areas such as the anterior maxilla. Compromised aesthetics may be masked to some degree by a low lip-line, thick gingival biotype, when treating single tooth cases, and so forth, but when implant therapy is carried out in patients with high lip-lines, patients with high aesthetic demands, with a very thin gingival biotype or multiple missing teeth where there is more extensive tissue deficit, then the risk for an aesthetic failure is far greater.3 The socket-shield (SS) technique provides a promising treatment adjunct to better manage these risks and preserve the post-extraction tissues in aesthetically challenging cases.4 The principle is to prepare the root of a tooth indicated for extraction in such a manner that the buccal / facial root section remains in-situ with its physiologic relation to the buccal plate intact. The tooth root section’s periodontal attachment apparatus (periodontal ligament (PDL), attachment fibers, vascularization, root cementum, bundle bone, alveolar bone) is intended to remain vital and undamaged so as to prevent the expected post-extraction socket remodeling and to support the buccal / facial tissues. Hereafter a case is presented where the SS technique was carried out at implant placement and the results from the case followed up at 1 year post-treatment demonstrate the degree of facial ridge tissue preservation achieved. C L I N I C A L",
"title": ""
},
{
"docid": "6af7bb1d2a7d8d44321a5b162c9781a2",
"text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.",
"title": ""
},
{
"docid": "3163d0e199440aff02b5568991abcf05",
"text": "The Internet of Things, an emerging global Internet-based technical architecture facilitating the exchange of goods and services in global supply chain networks has an impact on the security and privacy of the involved stakeholders. Measures ensuring the architecture’s resilience to attacks, data authentication, access control and client privacy need to be established. An adequate legal framework must take the underlying technology into account and would best be established by an international legislator, which is supplemented by the private sector according to specific needs and thereby becomes easily adjustable. The contents of the respective legislation must encompass the right to information, provisions prohibiting or restricting the use of mechanisms of the Internet of Things, rules on IT-security-legislation, provisions supporting the use of mechanisms of the Internet of Things and the establishment of a task force doing research on the legal challenges of the IoT. a 2010 Prof Rolf H. Weber. Published by Elsevier Ltd. All rights reserved. 1. Internet of Things: notion and technical primarily RFID-tagged items (Radio-Frequency Identificabackground The Internet of Things (IoT) is an emerging global Internetbased information architecture facilitating the exchange of goods and services in global supply chain networks. For example, the lack of certain goods would automatically be reported to the provider which in turn immediately causes electronic or physical delivery. From a technical point of view, the architecture is based on data communication tools, Internet of Things – Need k and locate assets; the u Kevin Ashton in a prese , From Internet of Data to onference-lux_en.pdf). ackground of the IoT see ternet of Things, Berlin/H k/London 2008. vices for the Internet of ting see Davy Preuvene a note 4, 288, at 296 ss. olf H. Weber. Published b tion). The IoT has the purpose of providing an IT-infrastructure facilitating the exchanges of ‘‘things’’ in a secure and reliable manner. The most popular industry proposal for the new IT-infrastructure of the IoT is based on an Electronic Product Code (EPC), introduced by EPCglobal and GS1. The ‘‘things’’ are physical objects carrying RFID tags with a unique EPC; the infrastructure can offer and query EPC Information Services (EPCIS) both locally and remotely to subscribers. The for a New Legal Environment? [2009] 25 Computer Law & Security niversal, unique identification of individual items through the EPC ntation in 1998 (see Gerald Santucci, Paper for the International Internet of Things, at p. 2, available at: ftp://ftp.cordis.europa.eu/ Christian Floerkemeier/Marc Langheinrich/Elgar Fleisch/Friedeeidelberg 2008; Lu Yan/Yan Zhang/Laurence T. Yang/Huansheng Things, Thesis, Berlin 2008, 30/31; to the details of the service ers/Yolande Berbers, Internet of Things: A Context-Awareness y Elsevier Ltd. All rights reserved. c o m p u t e r l a w & s e c u r i t y r e v i e w 2 6 ( 2 0 1 0 ) 2 3 – 3 0 24 information is not fully saved on an RFID tag, but a supply of the information by distributed servers on the Internet is made available through linking and cross-linking with the help of an Object Naming Service (ONS). The ONS is authoritative (linking metadata and services) in the sense that the entity having – centralized – change control over the information about the EPC is the same entity that assigned the EPC to the concerned item. 
Thereby, the architecture can also serve as backbone for ubiquitous computing, enabling smart environments to recognize and identify objects, and receive information from the Internet to facilitate their adaptive functionality. The central ONS root is operated by the (private) company VeriSign, a provider of Internet infrastructure services. The ONS is based on the well-known Domain Name System (DNS). Technically, in order to use the DNS to find information about an item, the item’s EPC must be converted into a format that the DNS can understand, which is the typical, ‘‘dot’’ delimited, left to right form of all domain names. Since EPC is encoded into syntactically correct domain name and then used within the existing DNS infrastructure, the ONS can be considered as subset of the DNS. For this reason, however, the ONS will also inherit all of the well-documented DNS weaknesses, such as the limited redundancy in practical implementations and the creation of single points of failure. 2. Security and privacy needs 2.1. Requirements related to IoT technology The described technical architecture of the IoT has an impact on the security and privacy of the involved stakeholders. Privacy includes the concealment of personal information as well as the ability to control what happens with this information. The right to privacy can be considered as either 7 Fabian, supra note 6, at 33. 8 EPCglobal, Object Naming Service (ONS) Version 1.0.1, at para 4.2, available at: http://www.epcglobalinc.org/standards/ons/ ons_1_0_1-standard-20080529.pdf. 9 Fabian, supra note 6, at 1. 10 EPCglobal, Object Naming Service (ONS) Version 1.0.1, supra note 8, at para 5.2. 11 For more details see Weber, supra note 1. 12 Seda F. Gürses/Bettina Berendt/Thomas Santen, Multilateral Security Requirements Analysis for Preserving Privacy in Ubiquitous Environments, in: Bettina Berendt/Ernestina Menasalvas (eds), Workshop on Ubiquitous Knowledge Discovery for Users (UKDU ’06), at 51–64; for privacy as freedom see Gus Hosein, Privacy as Freedom, in: Rikke Frank Jørgensen (ed.), Human Rights in the Global Information Society, Cambridge/Massachusetts 2006, at 121–147. 13 Gürses/Berendt/Santen, supra note 12, at 54. 14 See also Ari Juels, RFID Security and Privacy: A Research Survey, IEEE Journal on Selected Areas in Communications, Vol. 24, 2006, 381–394, at 383; Marc Langheinrich Marc/Friedemann Mattern, Wenn der Computer verschwindet, digma 2002, 138–142, at 139; Friedemann Mattern, Ubiquitous Computing: Eine Einführung mit Anmerkungen zu den sozialen und rechtlichen Folgen, in: Jürgen Taeger/Andreas Wiebe (eds), Mobilität. Telematik, Recht, Köln 2005, 1–34, at 18 s. a basic and inalienable human right, or as a personal right or possession. The attribution of tags to objects may not be known to users, and there may not be an acoustic or visual signal to draw the attention of the object’s user. Thereby, individuals can be followed without them even knowing about it and would leave their data or at least traces thereof in cyberspace. Further aggravating the problem, it is not anymore only the state that is interested in collecting the respective data, but also private actors such as marketing enterprises. Since business processes are concerned, a high degree of reliability is needed. In the literature, the following security and privacy requirements are described: Resilience to attacks: The system has to avoid single points of failure and should adjust itself to node failures. 
Data authentication: As a principle, retrieved address and object information must be authenticated. Access control: Information providers must be able to implement access control on the data provided. Client privacy: Measures need to be taken that only the information provider is able to infer from observing the use of the lookup system related to a specific customer; at least, inference should be very hard to conduct. Private enterprises using IoT technology will have to include these requirements into their risk management concept governing the business activities in general. 2.2. Privacy enhancing technologies (PET) The fulfilment of customer privacy requirements is quite difficult. A number of technologies have been developed in order to achieve information privacy goals. These Privacy Enhancing Technologies (PET) can be described in short as follows: Virtual Private Networks (VPN) are extranets established by close groups of business partners. As only partners have access, they promise to be confidential and have integrity. However, this solution does not allow for a dynamic global information exchange and is impractical with regard to third parties beyond the borders of the extranet. Transport Layer Security (TLS), based on an appropriate global trust structure, could also improve confidentiality and integrity of the IoT. However, as each ONS delegation step 15 Mattern, supra note 14, at 24. 16 See Benjamin Fabian/Oliver Günther, Distributed ONS and its Impact on Privacy, 1223, 1225, available at: http://ieeexplore.ieee. org/stamp/stamp.jsp?arnumber1⁄404288878. 17 For RFID authentication see Juels, supra note 14, at 384 s; Rolf H. Weber/Annette Willi, IT-Sicherheit und Recht, Zurich 2006, at 284. 18 See also Eberhard Grummt/Markus Müller, Fine-Grained Access Control for EPC Information Services, in: Floerkemeier/ Langheinrich/Fleisch/Mattern/Sarma, supra note 4, at 35–49. 19 Fabian, supra note 6, 61 s; Benjamin Fabian/Oliver Günther, Security Challenges of the EPCglobal Network, Communications of the ACM, Vol. 52, July 2009, 121–125, at 124 s. 25 Jürgen Müller/Matthias Handy, RFID als Technik des Ubiquitous Computing – Eine Gefahr für die Privatsphäre?, at 17, available at: c o m p u t e r l a w & s e c u r i t y r e v i e w 2 6 ( 2 0 1 0 ) 2 3 – 3 0 25 requires a new TLS connection, the search of information would be negatively affected by many additional layers. DNS Security Extensions (DNSSEC) make use of public-key cryptography to sign resource records in order to guarantee origin authenticity and integrity of delivered information. However, DNSSEC could only assure global ONS information authenticity if the entire Internet community adopts it. Onion Routing encrypts and mixes Internet traffic from many different sources, i.e. data is wrapped into multiple encryption layers, using the public keys of the onion routers on the transmission path. This process would impede matching a particular Internet Protocol packet to a particular source. However, onion routing increases waiting times and thereby",
"title": ""
},
{
"docid": "48dd3e8e071e7dd580ea42b528ee9427",
"text": "Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in their relation between systems characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model: TAM2. It includes subjective norms, and was tested with longitudinal research designs. Overall the two explain about 40% of system’s use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but has to be integrated into a broader one which would include variables related to both human and social change processes, and to the adoption of the innovation model. # 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1b2d7b2895ae4b996797ea64ddbae14e",
"text": "For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point to outsource the data at the first place. To process queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, user's input query, and data access patterns. Also, we empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end, and this lightweight scheme allows a user to use any mobile device to perform the kNN query.",
"title": ""
},
{
"docid": "8001e848f42df09e9e240599de307fec",
"text": "Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven “clips” together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.",
"title": ""
},
{
"docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35",
"text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.",
"title": ""
},
{
"docid": "54bb841f149a7c94a92bc51dcd413872",
"text": "An extensive set of research efforts have explored Channel State Information for human activity detection. By extracting CSI from a sequence of packets, one can statistically analyze the temporal variations embedded therein and recognize corresponding human activities. In this paper, we present Wi-Chase, a sensorless system based on CSI from ubiquitous WiFi packets for human activity detection. Different from existing schemes utilizing only CSI of one or a small subset of subcarriers, Wi-Chase fully utilizes all available subcarriers of the WiFi signal and incorporates variations in both their phases and magnitudes. As each subcarrier carries integral information that will improve the recognition accuracy because of detailed correlated information content in different subcarriers, we can achieve much higher detection accuracy. To the best of our knowledge, this is the first system that gathers information from all the subcarriers to identify and classify multiple activities. Our experimental results show that Wi-Chase is robust and achieves an average classification accuracy greater than 97% for multiple communication links.",
"title": ""
},
{
"docid": "292fe6afb5cb4c6b2694033d57b9012a",
"text": "the goal of this paper is to survey access control models, protocols and frameworks in IoT. We provide a literature overview and discuss in a qualitative way the most relevant IoT related-projects over recent years.",
"title": ""
},
{
"docid": "f9d91253c5c276bb020daab4a4127822",
"text": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.",
"title": ""
},
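    Note: the preceding record describes a graph-driven approach that prioritizes visualization-to-visualization transitions by minimizing an audience-cost objective. The paper's actual objective function and search procedure are not given here, so the sketch below uses a simple greedy nearest-neighbor ordering over an assumed pairwise cost matrix, purely as an illustration of the idea rather than the authors' method.

```python
def order_visualizations(cost, start=0):
    """Greedy ordering: from the current chart, always move to the unvisited chart
    with the cheapest transition. cost[i][j] is an assumed audience cost of showing
    chart j immediately after chart i."""
    n = len(cost)
    order, visited = [start], {start}
    while len(order) < n:
        i = order[-1]
        j = min((k for k in range(n) if k not in visited), key=lambda k: cost[i][k])
        order.append(j)
        visited.add(j)
    return order

# Toy usage with three charts and symmetric transition costs.
cost = [[0, 2, 5],
        [2, 0, 1],
        [5, 1, 0]]
print(order_visualizations(cost))  # -> [0, 1, 2]
```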
{
"docid": "95a3ad24715dba54a4c9630590db090d",
"text": "Decades after the first patients were treated with antibiotics, bacterial infections have again become a threat because of the rapid emergence of resistant bacteria-a crisis attributed to abuse of these medications and a lack of new drug development.",
"title": ""
},
{
"docid": "300553571302f85f39ce4902f84ca527",
"text": "Student motivation as an academic enabler for school success is discussed. Contrary to many views, however, the authors conceive of student motivation as a multifaceted construct with different components. Accordingly, the article includes a discussion of four key components of student motivation including academic self-efficacy, attributions, intrinsic motivation, and achievement goals. Research on each of these four components is described, research relating these four components to academic achievement and other academic enablers is reviewed, and suggestions are offered for instruction and assessment. Psychologists and educators have long considered the role of motivation in student achievement and learning (for a review see Graham & Weiner, 1996). Much of the early research on student achievement and learning separated cognitive and motivational factors and pursued very distinct lines of research that did not integrate cognition and motivation. However, since at least the 1980s there has been a sustained research focus on how motivational and cognitive factors interact and jointly influence student learning and achievement. In more colloquial terms, there is a recognition that students need both the cognitive skill and the motivational will to do well in school (Pintrich & Schunk, 2002). This miniseries continues in this tradition by highlighting the contribution of both motivational and cognitive factors for student academic success. The integration of motivational and cognitive factors was facilitated by the shift in motivational theories from traditional achievement motivation models to social cognitive models of motivation (Pintrich & Schunk, 2002). One of the most important assumptions of social cognitive models of motivation is that motivation is a dynamic, multifaceted phenomenon that contrasts with the quantitative view taken by traditional models of motivation. In other words, these newer social cognitive models do not assume that students are either “motivated” or “not motivated” or that student motivation can be characterized in some quantitative manner between two endpoints on a single continuum. Rather, social cognitive models stress that students can be motivated in multiple ways and the important issue is understanding how and why students are motivated for school achievement. This change in focus implies that teachers or school psychologists should not label students as “motivated” or “not motivated” in some global fashion. Furthermore, assessment instruments that generate a single global “motivation” score for students may be misleading in terms of a more multifaceted understanding of student motivation. Accordingly, in the discussion of motivation as an academic enabler, many aspects of student motivation including self-efficacy, attributions, intrinsic motivation, and goals are considered. A second important assumption of social cognitive models of motivation is that motivation is not a stable trait of an individual, but is more situated, contextual, and domain-specific. In other words, not only are students motivated in multiple ways, but their motivation can vary depending on the situation or context in the classroom or school. Although this assumption makes it more difficult for research and assessment efforts, it means that student motivation is conceived as being inherently changeable and sensitive to the context. 
This provides hope for teachers and school psychologists and suggests that instructional efforts and the design of classrooms and schools can make a difference in motivating students for academic achievement. This situated assumption means that student motivation probably varies as a function of subject matter domains and classrooms (e.g., Bong, 2001). For example, within social cognitive models, motivation is usually assessed for a specific subject area such as math, reading, science, or social studies and in reference to a specific classroom or teacher. In some ways, this also fits with teachers' and parents' own perceptions and experiences as they find that some children are quite motivated for mathematics, whereas others hate it, and also observe these motivational differences with other subject areas as well. However, this implies that assessment instruments that assess general student motivation for school or academics may not be as useful as more domain or context specific assessment tools. A third assumption concerns the central role of cognition in social cognitive models of motivation. That is, it is not just the individual's cultural, demographic, or personality characteristics that influence motivation and achievement directly, or just the contextual characteristics of the classroom environment that shape motivation and achievement, but rather the individual's active regulation of his or her motivation, thinking, and behavior that mediates the relationships between the person, context, and eventual achievement. That is, students' own thoughts about their motivation and learning play a key role in mediating their engagement and subsequent achievement. Following from these three general assumptions, social cognitive motivational theorists have proposed a large number of different motivational constructs that may facilitate or constrain student achievement and learning. Although there are good theoretical reasons for some of these distinctions among different motivational theories and constructs, in many cases they can be confusing and less than helpful in developing applications to improve student motivation and subsequent learning in school (Pintrich, 2000a). Rather than discussing all the different motivational constructs that may be enablers of student achievement and learning, this article will focus on four key families of motivational beliefs (self-efficacy, attributions, intrinsic motivation, and goal orientations). These four families represent the currently accepted major social cognitive motivational theories (Eccles, Wigfield, & Schiefele, 1998; Graham & Weiner, 1996; Pintrich & Schunk, 2002) and, therefore, seem most relevant when thinking about how motivation relates to achievement and other academic enablers. For each of the four general components, the components are defined, a summarization is given for how the motivational component is related to student achievement and learning as well as the other academic enablers discussed in this special issue, and some implications for instruction and assessment are suggested. Although these four families are interrelated, it is beyond the scope of this article to present an interrelated model of self-efficacy, attributions, intrinsic motivation, and goal orientations. Readers interested in a more comprehensive overview may refer to Pintrich and Schunk's (2002) detailed discussion of motivational processes in schooling. 
Adaptive Self-Efficacy Beliefs as Enablers of Success A common layperson's definition of motivation is that it involves a strong personal interest in a particular subject or activity. Students who are interested are motivated and they learn and achieve because of this strong interest. Although interest as a component of student motivation will be discussed later, one of the more important motivational beliefs for student achievement is self-efficacy, which concerns beliefs about capabilities to do a task or activity. More specifically, self-efficacy has been defined as individuals' beliefs about their performance capabilities in a particular context or a specific task or domain (Bandura, 1997). Self-efficacy is assumed to be situated and contextualized, not a general belief about self-concept or self-esteem. For example, a student might have high self-efficacy for doing algebra problems, but a lower self-efficacy for geometry problems or other subject areas, depending on past successes and failures. These self-efficacy beliefs are distinct from general self-concept beliefs or self-esteem. Although the role of self-efficacy has been studied in a variety of domains including mental health and health behavior such as coping with depression or smoking cessation, business management, and athletic performance, a number of educational psychologists have examined how self-efficacy relates to behavior in elementary and secondary academic settings (e.g., Bandura, 1997; Eccles et al., 1998; Pintrich, 2000b; Pintrich & De Groot, 1990; Schunk, 1989a, 1989b, 1991). In particular, self-efficacy has been positively related to higher levels of achievement and learning as well as a wide variety of adaptive academic outcomes such as higher levels of effort and increased persistence on difficult tasks in both experimental and correlational studies involving students from a variety of age groups (Bandura, 1997; Pintrich & Schunk, 2002). Students who have more positive self-efficacy beliefs (i.e., they believe they can do the task) are more likely to work harder, persist, and eventually achieve at higher levels. In addition, there is evidence that students who have positive self-efficacy beliefs are more likely to choose to continue to take more difficult courses (e.g., advanced math courses) over the course of schooling (Eccles et al., 1998). In our own correlational research with junior high students in Michigan, we have consistently found that self-efficacy beliefs are positively related to student cognitive engagement and their use of self-regulatory strategies (similar in some ways to study skills) as well as general achievement as indexed by grades (e.g., Pintrich, 2000b; Pintrich & De Groot, 1990; Welters, Yu, & Pintrich, 1996). In summary, both experimental and correlational research in schools suggests that self-efficacy is positively related to a host of positive outcomes of schooling such as choice, persistence, cognitive engagement, use of self-regulatory strategies, and actual achievement. This generalization seems to apply to all students, as it is relatively stable across different ages and grades",
"title": ""
},
{
"docid": "8ea1c8609b2c9e52574bed84236e77fa",
"text": "We address the problem of person detection and tracking in crowded video scenes. While the detection of individual objects has been improved significantly over the recent years, crowd scenes remain particularly challenging for the detection and tracking tasks due to heavy occlusions, high person densities and significant variation in people's appearance. To address these challenges, we propose to leverage information on the global structure of the scene and to resolve all detections jointly. In particular, we explore constraints imposed by the crowd density and formulate person detection as the optimization of a joint energy function combining crowd density estimation and the localization of individual people. We demonstrate how the optimization of such an energy function significantly improves person detection and tracking in crowds. We validate our approach on a challenging video dataset of crowded scenes.",
"title": ""
},
{
"docid": "f1ab979a80ffed5ac002ad13d9a0c2ea",
"text": "Interleaved multiphase synchronous buck converters are often used to power computer CPU, GPU, and memory to meet the demands for increasing load current and fast current slew rate of the processors. This paper reports and explains undesired coupling between discrete inductors in typical commercial multiphase applications where space is limited. In this paper, equations of coupling coefficient are derived for single-turn and multiturn Ferrite core inductors commonly used in multiphase converters and are verified by Maxwell static simulation. The influence of the coupling effect on inductor current waveforms is demonstrated by Maxwell transient simulation and confirmed by experiments. The analysis provides a useful tool for mitigating the coupling effect in multiphase converters to avoid early inductor saturation and interference between phases. Design guidelines and recommendations are provided to minimize the unwanted coupling effect in multiphase converters.",
"title": ""
},
{
"docid": "5e67446d23a9e282478520f0486731c9",
"text": "This paper proposes a new ladder FeRAM ar chitecture with capacitance-coupled-bitline (CCB) cells for high-end embedded applications. The ladder FeRAM architecture short-circuits both electrodes of each ferroelectric capacitor at every standby cycle. This overcomes the fatal disturbance problem inherent to the CCB cell, and halves read/write cycle time by sharing a plateline and its driver with 32 cells in two neighboring ladder blocks. This configuration realizes small 0.35 μm2 cell using a highly reliable ferroelectric capacitor of as large as 0.145 μm2 size, and a highly compatible process with logic-LSI. A slow plateline drive of the CCB cell due to a resistive plateline using an active area is minimized to 2.5 ns by introducing thick M3 shunt-path and distributed M3 platelines. The area penalty of the shunt is 4.7% of an array. A serious bitline-to-bitline coupling noise in edge bitlines up to the noise/signal ratio of 0.38 due to the operation peculiar to FeRAM is eliminated by introducing activated dummy bitlines and their sense amplifiers. The design of 16 cells in a ladder block is optimal for effective cell size, cell signal, and active power dissipation. A new early plateline pull-down read scheme omits \"0\"-data rewrite operation without read disturbance. A 64 Kb ladder FeRAM with the CCB cells and the early plateline pull-down read scheme achieves a fast random read/write of 10 ns cycle and 8 ns access at 150°C.",
"title": ""
},
{
"docid": "f83be6d305aed2929130ec6bab038820",
"text": "A design of single-feed dual-frequency patch antennas with different polarizations and radiation patterns is proposed. The antenna structure is composed of two stacked patches, in which the top is a square patch and the bottom is a corner-truncated square-ring patch, and the two patches are connected together with four conducting strips. Two operating frequencies can be found in the antenna structure. The radiations at the lower and higher frequencies are a broadside pattern with circular polarization and a conical pattern with linear polarization, respectively. A prototype operating at 1575 and 2400 MHz bands is constructed. Both experimental and simulated results show that the prototype has good performances and is suitable for GPS and WLAN applications.",
"title": ""
},
{
"docid": "04edbcc6006a76e538cffb0cc09d9fc5",
"text": "Feature extraction is a fundamental step when mammography image analysis is addressed using learning based approaches. Traditionally, problem dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy outperforming the state-of-the-art representation from 79.9% to 86% in terms of area under the ROC curve.",
"title": ""
},
{
"docid": "1bf145cceb4ec049940ebb284d4d3d88",
"text": "The upper limb of the human body has a higher likeliness of getting affected by muscle fatigue and injury, due to repetitive motion. This letter presents an interactive glove for the wrist of the human body. The assistive model is enabled with pneumatic actuators and stretch sensors, which support the user in performing wrist flexion, extension, pronation, and supination. The purpose of this glove is to provide the subject with a force-feedback feeling, and while a repetitive practice of such training is done, rehabilitation can be achieved. For evaluation, surface electromyographical signals were first used to study the effect of force feedback on associated muscle activation. Average % maximum voluntary isometric contraction of all subjects showed statistically reduced muscle activity in at least one associated muscle while performing all motions, enabled with actuators. Second, the force feedback generated by the glove at the user's wrist has been modeled using experiments with a force transducer. An application was then developed where the user was given the ability to interact with virtual objects and receive feedback through the glove. The evaluation of the feasibility of this application was done by measuring and comparing force profiles while generating an equal force in an ideal and a predicted environment.",
"title": ""
},
{
"docid": "a059b3ef66c54ecbe43aa0e8d35b9da8",
"text": "Completion of lagging strand DNA synthesis requires processing of up to 50 million Okazaki fragments per cell cycle in mammalian cells. Even in yeast, the Okazaki fragment maturation happens approximately a million times during a single round of DNA replication. Therefore, efficient processing of Okazaki fragments is vital for DNA replication and cell proliferation. During this process, primase-synthesized RNA/DNA primers are removed, and Okazaki fragments are joined into an intact lagging strand DNA. The processing of RNA/DNA primers requires a group of structure-specific nucleases typified by flap endonuclease 1 (FEN1). Here, we summarize the distinct roles of these nucleases in different pathways for removal of RNA/DNA primers. Recent findings reveal that Okazaki fragment maturation is highly coordinated. The dynamic interactions of polymerase δ, FEN1 and DNA ligase I with proliferating cell nuclear antigen allow these enzymes to act sequentially during Okazaki fragment maturation. Such protein-protein interactions may be regulated by post-translational modifications. We also discuss studies using mutant mouse models that suggest two distinct cancer etiological mechanisms arising from defects in different steps of Okazaki fragment maturation. Mutations that affect the efficiency of RNA primer removal may result in accumulation of unligated nicks and DNA double-strand breaks. These DNA strand breaks can cause varying forms of chromosome aberrations, contributing to development of cancer that associates with aneuploidy and gross chromosomal rearrangement. On the other hand, mutations that impair editing out of polymerase α incorporation errors result in cancer displaying a strong mutator phenotype.",
"title": ""
},
{
"docid": "d518f1b11f2d0fd29dcef991afe17d17",
"text": "Applications must be able to synchronize accesses to operating system resources in order to ensure correctness in the face of concurrency and system failures. System transactions allow the programmer to specify updates to heterogeneous system resources with the OS guaranteeing atomicity, consistency, isolation, and durability (ACID). System transactions efficiently and cleanly solve persistent concurrency problems that are difficult to address with other techniques. For example, system transactions eliminate security vulnerabilities in the file system that are caused by time-of-check-to-time-of-use (TOCTTOU) race conditions. System transactions enable an unsuccessful software installation to roll back without disturbing concurrent, independent updates to the file system.\n This paper describes TxOS, a variant of Linux 2.6.22 that implements system transactions. TxOS uses new implementation techniques to provide fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. The prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. For instance, a transactional installation of OpenSSH incurs only 10% overhead, and a non-transactional compilation of Linux incurs negligible overhead on TxOS. By making transactions a central OS abstraction, TxOS enables new transactional services. For example, one developer prototyped a transactional ext3 file system in less than one month.",
"title": ""
}
] |
scidocsrr
|
929fdc25c5a8dd003c2259c213a7a550
|
Inter-dependent CNNs for joint scene and object recognition
|
[
{
"docid": "9c540b058e851cd9fa3a0195b039b965",
"text": "The proposed active learning framework learns scene and object classification models simultaneously. Both scene and object classification models take advantage of the interdependence between them to select the most informative samples with the least manual labeling cost. To the best of our knowledge, any previous work using active learning to classify scene and objects together is unknown. Leveraging upon the inter-relationships between scene and objects, we propose a new information-theoretic sample selection strategy. [Figure] This figure presents a pictorial representation of the proposed framework. Overview of Our Joint Active Learning Framework",
"title": ""
},
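    Note: the record above mentions an information-theoretic sample selection strategy that exploits scene-object interdependence but does not spell out the criterion. The following is a generic, hedged stand-in: it scores each unlabeled sample by the sum of the entropies of its scene and object predictions and picks the most uncertain ones. The combination rule and all names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a (re-normalized) probability vector."""
    p = np.asarray(p, dtype=float)
    p = p / (p.sum() + eps)
    return -(p * np.log(p + eps)).sum()

def select_samples(scene_probs, object_probs, budget):
    """Pick the unlabeled samples whose combined scene/object predictions are most
    uncertain. scene_probs: (N, S) class probabilities from the scene model;
    object_probs: (N, O) probabilities from the object model."""
    scores = [entropy(s) + entropy(o) for s, o in zip(scene_probs, object_probs)]
    return np.argsort(scores)[::-1][:budget]   # highest joint uncertainty first

rng = np.random.default_rng(1)
scene_p = rng.dirichlet(np.ones(8), size=100)    # toy scene predictions
object_p = rng.dirichlet(np.ones(20), size=100)  # toy object predictions
print(select_samples(scene_p, object_p, budget=10))
```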
{
"docid": "332d517d07187d2403a672b08365e5ef",
"text": "Please cite this article in press as: C. Galleguillos doi:10.1016/j.cviu.2010.02.004 The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images include occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can successfully identify object classes up to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "81748f85693f48a2a454d097b9885eb3",
"text": "The paper analyses the severity of gridlocks in interbank payment systems operating on a real time basis and evaluates by means of simulations the merits of a gridlock resolution algorithm. Data used in the simulations consist of actual payments settled in the Danish and Finnish RTGS systems. The algorithm is found to be applicable to a real time environment and effective in reducing queuing in the systems at all levels of liquidity, but in particular when intra-day liquidity is scarce.",
"title": ""
},
{
"docid": "8a6da37bae9c4ed6a771905a98b4cafc",
"text": "Compressing convolutional neural networks (CNNs) has received ever-increasing research focus. However, most existing CNN compression methods do not interpret their inherent structures to distinguish the implicit redundancy. In this paper, we investigate the problem of CNN compression from a novel interpretable perspective. The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantitate the feature map importance in a feature-agnostic manner to guide model compression. Kernel clustering is further conducted based on the KSE indicator to accomplish highprecision CNN compression. KSE is capable of simultaneously compressing each layer in an efficient way, which is significantly faster compared to previous data-driven feature map pruning methods. We comprehensively evaluate the compression and speedup of the proposed method on CIFAR-10, SVHN and ImageNet 2012. Our method demonstrates superior performance gains over previous ones. In particular, it achieves 4.7× FLOPs reduction and 2.9× compression on ResNet-50 with only a Top-5 accuracy drop of 0.35% on ImageNet 2012, which significantly outperforms state-of-the-art methods.",
"title": ""
},
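    Note: the KSE passage above scores input feature maps by kernel sparsity and entropy to guide kernel clustering and compression. The abstract does not give the exact formulas, so the sketch below uses plausible stand-ins (L1 mass for sparsity, Shannon entropy of the per-output-channel kernel-norm distribution) and an assumed multiplicative combination; treat it as illustrative only, not the paper's definition.

```python
import numpy as np

def kse_scores(weight, eps=1e-12):
    """Score each input feature map of a conv layer.

    weight has shape (out_channels, in_channels, kH, kW), as in a PyTorch conv layer.
    Sparsity here is the total L1 mass of the 2D kernels attached to an input channel;
    entropy measures how that mass is spread across output channels.
    """
    out_c, in_c = weight.shape[:2]
    # L1 norm of each 2D kernel: shape (out_channels, in_channels)
    kernel_l1 = np.abs(weight).reshape(out_c, in_c, -1).sum(axis=-1)

    sparsity = kernel_l1.sum(axis=0)                              # per input channel
    p = kernel_l1 / (kernel_l1.sum(axis=0, keepdims=True) + eps)  # column-normalized
    entropy = -(p * np.log(p + eps)).sum(axis=0)                  # per input channel

    # Assumed combination: low total mass and low diversity both lower the score.
    return sparsity * entropy

w = np.random.randn(64, 32, 3, 3)
scores = kse_scores(w)
keep = np.argsort(scores)[-16:]   # e.g., keep the 16 highest-scoring input channels
print(scores.shape, keep)
```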
{
"docid": "4bca13cc04fc128844ecc48c0357b974",
"text": "From its roots in physics, mathematics, and biology, the study of complexity science, or complex adaptive systems, has expanded into the domain of organizations and systems of organizations. Complexity science is useful for studying the evolution of complex organizations -entities with multiple, diverse, interconnected elements. Evolution of complex organizations often is accompanied by feedback effects, nonlinearity, and other conditions that add to the complexity of existing organizations and the unpredictability of the emergence of new entities. Health care organizations are an ideal setting for the application of complexity science due to the diversity of organizational forms and interactions among organizations that are evolving. Too, complexity science can benefit from attention to the world’s most complex human organizations. Organizations within and across the health care sector are increasingly interdependent. Not only are new, highly powerful and diverse organizational forms being created, but also the restructuring has occurred within very short periods of time. In this chapter, we review the basic tenets of complexity science. We identify a series of key differences between the complexity science and established theoretical approaches to studying health organizations, based on the ways in which time, space, and constructs are framed. The contrasting perspectives are demonstrated using two case examples drawn from healthcare innovation and healthcare integrated systems research. Complexity science broadens and deepens the scope of inquiry into health care organizations, expands corresponding methods of research, and increases the ability of theory to generate valid research on complex organizational forms. Formatted",
"title": ""
},
{
"docid": "1224987c5fdd228cc38bf1ee3aeb6f2d",
"text": "Many existing studies of social media focus on only one platform, but the reality of users' lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider \"audience\" and \"content\" when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.",
"title": ""
},
{
"docid": "97ed18e26a80a2ae078f78c70becfe8c",
"text": "A fully-integrated 18.5 kHz RC time-constant-based oscillator is designed in 65 nm CMOS for sleep-mode timers in wireless sensors. A comparator offset cancellation scheme achieves 4× to 25× temperature stability improvement, leading to an accuracy of ±0.18% to ±0.55% over -40 to 90 °C. Sub-threshold operation and low-swing oscillations result in ultra-low power consumption of 130 nW. The architecture also provides timing noise suppression, leading to 10× reduction in long-term Allan deviation. It is measured to have a stability of 20 ppm or better for measurement intervals over 0.5 s. The oscillator also has a fast startup-time, with the period settling in 4 cycles.",
"title": ""
},
{
"docid": "17faf590307caf41095530fcec1069c7",
"text": "Fine-grained visual recognition typically depends on modeling subtle difference from object parts. However, these parts often exhibit dramatic visual variations such as occlusions, viewpoints, and spatial transformations, making it hard to detect. In this paper, we present a novel attention-based model to automatically, selectively and accurately focus on critical object regions with higher importance against appearance variations. Given an image, two different Convolutional Neural Networks (CNNs) are constructed, where the outputs of two CNNs are correlated through bilinear pooling to simultaneously focus on discriminative regions and extract relevant features. To capture spatial distributions among the local regions with visual attention, soft attention based spatial LongShort Term Memory units (LSTMs) are incorporated to realize spatially recurrent yet visually selective over local input patterns. All the above intuitions equip our network with the following novel model: two-stream CNN layers, bilinear pooling layer, spatial recurrent layer with location attention are jointly trained via an end-to-end fashion to serve as the part detector and feature extractor, whereby relevant features are localized and extracted attentively. We show the significance of our network against two well-known visual recognition tasks: fine-grained image classification and person re-identification.",
"title": ""
},
{
"docid": "7e38ba11e394acd7d5f62d6a11253075",
"text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.",
"title": ""
},
{
"docid": "b72c8a92e8d0952970a258bb43f5d1da",
"text": "Neural networks excel in detecting regular patterns but are less successful in representing and manipulating complex data structures, possibly due to the lack of an external memory. This has led to the recent development of a new line of architectures known as Memory-Augmented Neural Networks (MANNs), each of which consists of a neural network that interacts with an external memory matrix. However, this RAM-like memory matrix is unstructured and thus does not naturally encode structured objects. Here we design a new MANN dubbed Relational Dynamic Memory Network (RDMN) to bridge the gap. Like existing MANNs, RDMN has a neural controller but its memory is structured as multi-relational graphs. RDMN uses the memory to represent and manipulate graph-structured data in response to query; and as a neural network, RDMN is trainable from labeled data. Thus RDMN learns to answer queries about a set of graph-structured objects without explicit programming. We evaluate the capability of RDMN on several important prediction problems, including software vulnerability, molecular bioactivity and chemical-chemical interaction. Results demonstrate the efficacy of the proposed model.",
"title": ""
},
{
"docid": "f82a9c15e88ba24dbf8f5d4678b8dffd",
"text": "Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "0016ef3439b78a29c76a14e8db2a09be",
"text": "In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called multiagent enforced subpopulations (multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.",
"title": ""
},
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "f55cd152f6c9e32ed33e4cca1a91cf2e",
"text": "This study investigated whether being charged with a child pornography offense is a valid diagnostic indicator of pedophilia, as represented by an index of phallometrically assessed sexual arousal to children. The sample of 685 male patients was referred between 1995 and 2004 for a sexological assessment of their sexual interests and behavior. As a group, child pornography offenders showed greater sexual arousal to children than to adults and differed from groups of sex offenders against children, sex offenders against adults, and general sexology patients. The results suggest child pornography offending is a stronger diagnostic indicator of pedophilia than is sexually offending against child victims. Theoretical and clinical implications are discussed.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "5408e83074ead68bb897071f9975c3e9",
"text": "Vehicle detection at night time is a challenging problem due to low visibility and light distortion caused by motion and illumination in urban environments. This paper presents a method based on the deformable object model for detecting and classifying vehicles by using monocular infra-red cameras. As some features of vehicles, such as headlight and taillights are more visible at night time, we propose a weighted version of the deformable part model. We define weights for different features in the deformable part model of the vehicle and try to learn the weights through an enormous number of positive and negative samples. Experimental results prove the effectiveness of the algorithm for detecting close and medium range vehicles in urban scenes at night time.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "54eb416e4bc32654c6e55a58baacd853",
"text": "Condom catheters are often used in the management of male urinary incontinence, and are considered to be safe. As condom catheters are placed on the male genitalia, sometimes adequate care is not taken after placement owing to poor medical care of debilitated patients and feelings of embarrassment and shame. Similarly, sometimes the correct size of penile sheath is not used. Strangulation of penis due to condom catheter is a rare condition; only few such cases have been reported in the literature. Proper application and routine care of condom catheters are important in preventing this devastating complication especially in a neurologically debilitated population. We present a case of penile necrosis due to condom catheter. We will also discuss proper catheter care and treatment of possible complications.",
"title": ""
},
{
"docid": "c048e9d40670f07e642c00e1cb7874e0",
"text": "Dietary carbohydrates are a group of chemically defined substances with a range of physical and physiological properties and health benefits. As with other macronutrients, the primary classification of dietary carbohydrate is based on chemistry, that is character of individual monomers, degree of polymerization (DP) and type of linkage (α or β), as agreed at the Food and Agriculture Organization/World Health Organization Expert Consultation in 1997. This divides carbohydrates into three main groups, sugars (DP 1–2), oligosaccharides (short-chain carbohydrates) (DP 3–9) and polysaccharides (DP⩾10). Within this classification, a number of terms are used such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc. While effects of carbohydrates are ultimately related to their primary chemistry, they are modified by their physical properties. These include water solubility, hydration, gel formation, crystalline state, association with other molecules such as protein, lipid and divalent cations and aggregation into complex structures in cell walls and other specialized plant tissues. A classification based on chemistry is essential for a system of measurement, predication of properties and estimation of intakes, but does not allow a simple translation into nutritional effects since each class of carbohydrate has overlapping physiological properties and effects on health. This dichotomy has led to the use of a number of terms to describe carbohydrate in foods, for example intrinsic and extrinsic sugars, prebiotic, resistant starch, dietary fibre, available and unavailable carbohydrate, complex carbohydrate, glycaemic and whole grain. This paper reviews these terms and suggests that some are more useful than others. A clearer understanding of what is meant by any particular word used to describe carbohydrate is essential to progress in translating the growing knowledge of the physiological properties of carbohydrate into public health messages.",
"title": ""
},
{
"docid": "029c3f6528a4c80e8afe05c9397cc06a",
"text": "There are five types of taste receptor cell, sweet, salt, bitter, sour, and umami (protein taste). There are 1000 olfactory receptor genes each specifying a different type of receptor each for a set of odors. Tastes are primary, unlearned, rewards and punishers, and are important in emotion. Pheromones and some other olfactory stimuli are primary reinforcers, but for many odors the reward value is learned by stimulus–reinforcer association learning. The primary taste cortex in the anterior insula provides separate and combined representations of the taste, temperature, and texture (including fat texture) of food in the mouth independently of hunger and thus of reward value and pleasantness. One synapse on, in the orbitofrontal cortex, these sensory inputs are for some neurons combined by learning with olfactory and visual inputs, and these neurons encode food reward value in that they only respond to food when hungry, and in that activations correlate with subjective pleasantness. Cognitive factors, including word-level descriptions, and attention, modulate the representation of the reward value of taste, odor, and flavor in the orbitofrontal cortex and a region to which it projects, the anterior cingulate cortex. Further, there are individual differences in the representation of the reward value of food in the orbitofrontal cortex. Overeating and obesity are related in many cases to an increased reward value of the sensory inputs produced by foods, and their modulation by cognition and attention that override existing satiety signals. Rapid advances have been made recently in understanding the receptors for taste and smell, the neural systems for taste and smell, the separation of sensory from hedonic processing of taste and smell, and how taste and smell and also the texture of food are important in the palatability of food and appetite control. Emphasis is placed on these advances. Taste receptors. There are receptors on the tongue for sweet, salt, bitter, sour, and the fifth taste, umami as exemplified by monosodium glutamate (Chandrashekar et al., 2006; Chaudhari and Roper, 2010). Umami taste is found in a diversity of foods rich in glutamate like fish, meat, human mothers' milk, tomatoes and some vegetables, and is enhanced by some ribonucleotides (including inosine and guanosine nucleotides), which are present in, for example, meat and some fish. The mixture of these umami components, which act synergistically at the receptor, underlies the rich taste characteristic of many cuisines (Rolls, 2009). Olfactory receptors. There are approximately 1000 different types of …",
"title": ""
},
{
"docid": "7edb29f1b41347995febb525cc4cba2e",
"text": "Keyword queries enjoy widespread usage as they represent an intuitive way of specifying information needs. Recently, answering keyword queries on graph-structured data has emerged as an important research topic. The prevalent approaches build on dedicated indexing techniques as well as search algorithms aiming at finding substructures that connect the data elements matching the keywords. In this paper, we introduce a novel keyword search paradigm for graph-structured data, focusing in particular on the RDF data model. Instead of computing answers directly as in previous approaches, we first compute queries from the keywords, allowing the user to choose the appropriate query, and finally, process the query using the underlying database engine. Thereby, the full range of database optimization techniques can be leveraged for query processing. For the computation of queries, we propose a novel algorithm for the exploration of top-k matching subgraphs. While related techniques search the best answer trees, our algorithm is guaranteed to compute all k subgraphs with lowest costs, including cyclic graphs. By performing exploration only on a summary data structure derived from the data graph, we achieve promising performance improvements compared to other approaches.",
"title": ""
}
] |
scidocsrr
|
b74e70b331c0108cec91ee3ac69baf4b
|
Sentiment Analysis : A Review
|
[
{
"docid": "355fca41993ea19b08d2a9fc19e25722",
"text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.",
"title": ""
}
] |
[
{
"docid": "7432009332e13ebc473c9157505cb59c",
"text": "The use of future contextual information is typically shown to be helpful for acoustic modeling. However, for the recurrent neural network (RNN), it’s not so easy to model the future temporal context effectively, meanwhile keep lower model latency. In this paper, we attempt to design a RNN acoustic model that being capable of utilizing the future context effectively and directly, with the model latency and computation cost as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU) with an input projection layer inserted in it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that, the proposed model performs much better than long short-term memory (LSTM) and mGRU models, whereas enables online decoding with a maximum latency of 170 ms. This model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half less parameters.",
"title": ""
},
{
"docid": "f835e60133415e3ec53c2c9490048172",
"text": "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.",
"title": ""
},
{
"docid": "ac7591b1a0011b38ae88f5a4dd7ad200",
"text": "A succinct overview of some of the major research approaches to the study of leadership is provided as a foundation for the introduction of a multicomponent model of leadership that draws on those findings, complexity theory, and the concept of emergence. The major aspects of the model include: the personal characteristics and capacities, thoughts, feelings, behaviors, and human working relationships of leaders, followers, and other stake holders, the organization’s systems, including structures, processes, contents, and internal situations, the organization’s performance and outcomes, and the external environment(s), ecological niches, and external situations in which an enterprise functions. The relationship between this model and other approaches in the literature as well as directions in research on leadership and implications for consulting practice are discussed.",
"title": ""
},
{
"docid": "313a902049654e951860b9225dc5f4e8",
"text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.",
"title": ""
},
{
"docid": "c66069fc52e1d6a9ab38f699b6a482c6",
"text": "An understanding of the age of the Acheulian and the transition to the Middle Stone Age in southern Africa has been hampered by a lack of reliable dates for key sequences in the region. A number of researchers have hypothesised that the Acheulian first occurred simultaneously in southern and eastern Africa at around 1.7-1.6 Ma. A chronological evaluation of the southern African sites suggests that there is currently little firm evidence for the Acheulian occurring before 1.4 Ma in southern Africa. Many researchers have also suggested the occurrence of a transitional industry, the Fauresmith, covering the transition from the Early to Middle Stone Age, but again, the Fauresmith has been poorly defined, documented, and dated. Despite the occurrence of large cutting tools in these Fauresmith assemblages, they appear to include all the technological components characteristic of the MSA. New data from stratified Fauresmith bearing sites in southern Africa suggest this transitional industry maybe as old as 511-435 ka and should represent the beginning of the MSA as a broad entity rather than the terminal phase of the Acheulian. The MSA in this form is a technology associated with archaic H. sapiens and early modern humans in Africa with a trend of greater complexity through time.",
"title": ""
},
{
"docid": "44bbc67f44f4f516db97b317ae16a22a",
"text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.",
"title": ""
},
{
"docid": "9668d1cc357a70780282dfdfe9ed4bda",
"text": "A challenge in estimating students’ changing knowledge from sequential observations of their performance arises when each observed step involves multiple subskills. To overcome this mismatch in grain size between modelled skills and observed actions, we use logistic regression over each step’s subskills in a dynamic Bayes net (LR-DBN) to model transition probabilities for the overall knowledge required by the step. Unlike previous methods, LR-DBN can trace knowledge of the individual subskills without assuming they are independent. We evaluate how well it fits children’s oral reading fluency data logged by Project LISTEN’s Reading Tutor, compared to other methods.",
"title": ""
},
{
"docid": "a6773662bc858664d95e3df315d11f6c",
"text": "In this paper, we examine the strength of deep learning technique for diagnosing lung cancer on medical image analysis problem. Convolutional neural networks (CNNs) models become popular among the pattern recognition and computer vision research area because of their promising outcome on generating high-level image representations. We propose a new deep learning architecture for learning high-level image representation to achieve high classification accuracy with low variance in medical image binary classification tasks. We aim to learn discriminant compact features at beginning of our deep convolutional neural network. We evaluate our model on Kaggle Data Science Bowl 2017 (KDSB17) data set, and compare it with some related works proposed in the Kaggle competition.",
"title": ""
},
{
"docid": "999a1fbc3830ca0453760595046edb6f",
"text": "This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, ID embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures: In both systems, in quantitative experiments, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.",
"title": ""
},
{
"docid": "81840452c52d61024ba5830437e6a2c4",
"text": "Motivated by a real world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items, each with a positive real weight, and a set of knapsacks, each with a positive real capacity. In addition, for each item a set of knapsacks that can hold that item is specified. In a feasible assignment of items to knapsacks, each item is assigned to at most one knapsack, assignment restrictions are satisfied, and knapsack capacities are not exceeded. We consider the objectives of maximizing assigned weight and minimizing utilized capacity. We focus on obtaining approximate solutions in polynomial computational time. We show that simple greedy approaches yield 1/3-approximation algorithms for the objective of maximizing assigned weight. We give two different 1/2-approximation algorithms: the first one solves single knapsack problems successively and the second one is based on rounding the LP relaxation solution. For the bicriteria problem of minimizing utilized capacity subject to a minimum requirement on assigned weight, we give an (1/3,2)-approximation algorithm.",
"title": ""
},
{
"docid": "b4b0cbc448b45d337627b39029b6c60e",
"text": "Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by Evgeniou et al. [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.",
"title": ""
},
{
"docid": "42d6072e6cff71043e345f474d880c18",
"text": "The main purpose of this research is to design and develop complete system of a remote-operated multi-direction Unmanned Ground Vehicle (UGV). The development involved PIC microcontroller in remote-controlled and UGV robot, Xbee Pro modules, Graphic LCD 84×84, Vexta brushless DC electric motor and mecanum wheels. This paper show the study the movement of multidirectional UGV by using Mecanum wheels with differences drive configuration. The 16-bits Microchips microcontroller were used in the UGV's system that embed with Xbee Pro through variable baud-rate value via UART protocol and control the direction of wheels. The successful develop UGV demonstrated clearly the potential application of this type of vehicle, and incorporated the necessary technology for further research of this type of vehicle.",
"title": ""
},
{
"docid": "f583bd78a154d3317453e1cb02026b2d",
"text": "PURPOSE\nTo evaluate the clinical performance of lithium disilicate (LiDiSi) crowns with a feather-edge finish line margin over a 9-year period.\n\n\nMATERIALS AND METHODS\nIn total, 110 lithium disilicate crowns, 40 anterior (36.3%) and 70 posterior (63.7%), were cemented with resin cement after fluoridric acid and silane surface treatment and observed by a different clinician. The data were analyzed using the Kaplan-Meier method. The clinical evaluation used the California Dental Association (CDA) modified criteria after recalling all patients between January and April 2013.\n\n\nRESULTS\nTwo crowns had failed and were replaced due to core fractures. One chipping occurred on a first molar and the ceramic surface was polished. The overall survival probability was 96.1% up to 9 years, with a failure rate of 1.8%.\n\n\nCONCLUSION\nIn this retrospective analysis, lithium disilicate with a vertical finish line used in single-crown restorations had a low clinical failure rate up to 9 years.",
"title": ""
},
{
"docid": "d6d2e1c4da299fcc8dc1cff9c9999b1c",
"text": "Purpose – The purpose of this paper is to describe the successful use of a knowledge management (KM) model in a public sector organization. Design/methodology/approach – Building on the theoretical foundation of others, this case study demonstrates the value of KM modeling in a science-based initiative in the Canadian public service. Findings – The Inukshuk KM model, which comprises the five elements of technology, leadership, culture, measurement, and process, provides a holistic approach in public service KM. Practical implications – The proposed model can be employed by other initiatives to facilitate KM planning and implementation. Originality/value – This the first project to consider how KM models may be implemented in a Canadian public service environment.",
"title": ""
},
{
"docid": "5746e92b929d6635284f62280d7bf6bd",
"text": "The essentially infinite storage space offered by Cloud Computing is quickly becoming a problem for forensics investigators in regards to evidence acquisition, forensic imaging and extended time for data analysis. It is apparent that the amount of stored data will at some point become impossible to practically image for the forensic investigators to complete a full investigation. In this paper, we address these issues by determining the relationship between acquisition times on the different storage capacities, using remote acquisition to obtain data from virtual machines in the cloud. A hypothetical case study is used to investigate the importance of using a partial and full approach for acquisition of data from the cloud and to determine how each approach affects the duration and accuracy of the forensics investigation and outcome. Our results indicate that the relation between the time taken for image acquisition and different storage volumes is not linear, owing to several factors affecting remote acquisition, especially over the Internet. Performing the acquisition using cloud resources showed a considerable reduction in time when compared to the conventional imaging method. For a 30GB storage volume, the least time was recorded for the snapshot functionality of the cloud and dd command. The time using this method is reduced by almost 77 percent. FTK Remote Agent proved to be most efficient showing an almost 12 percent reduction in time over other methods of acquisition. Furthermore, the timelines produced with the help of the case study, showed that the hybrid approach should be preferred to complete approach for performing acquisition from the cloud, especially in time critical scenarios.",
"title": ""
},
{
"docid": "29f46a8f8275fe22cb1506c8ba4175a6",
"text": "Improving disaster management and recovery techniques is one of national priorities given the huge toll caused by man-made and nature calamities. Data-driven disaster management aims at applying advanced data collection and analysis technologies to achieve more effective and responsive disaster management, and has undergone considerable progress in the last decade. However, to the best of our knowledge, there is currently no work that both summarizes recent progress and suggests future directions for this emerging research area. To remedy this situation, we provide a systematic treatment of the recent developments in data-driven disaster management. Specifically, we first present a general overview of the requirements and system architectures of disaster management systems and then summarize state-of-the-art data-driven techniques that have been applied on improving situation awareness as well as in addressing users’ information needs in disaster management. We also discuss and categorize general data-mining and machine-learning techniques in disaster management. Finally, we recommend several research directions for further investigations.",
"title": ""
},
{
"docid": "cd1bf567e2e8bfbf460abb3ac1a0d4a5",
"text": "Memory channel contention is a critical performance bottleneck in modern systems that have highly parallelized processing units operating on large data sets. The memory channel is contended not only by requests from different user applications (CPU access) but also by system requests for peripheral data (IO access), usually controlled by Direct Memory Access (DMA) engines. Our goal, in this work, is to improve system performance byeliminating memory channel contention between CPU accesses and IO accesses. To this end, we propose a hardware-software cooperative data transfer mechanism, Decoupled DMA (DDMA) that provides a specialized low-cost memory channel for IO accesses. In our DDMA design, main memoryhas two independent data channels, of which one is connected to the processor (CPU channel) and the other to the IO devices (IO channel), enabling CPU and IO accesses to be served on different channels. Systemsoftware or the compiler identifies which requests should be handled on the IO channel and communicates this to the DDMA engine, which then initiates the transfers on the IO channel. By doing so, our proposal increasesthe effective memory channel bandwidth, thereby either accelerating data transfers between system components, or providing opportunities to employ IO performance enhancement techniques (e.g., aggressive IO prefetching)without interfering with CPU accessesWe demonstrate the effectiveness of our DDMA framework in two scenarios: (i) CPU-GPU communication and (ii) in-memory communication (bulk datacopy/initialization within the main memory). By effectively decoupling accesses for CPU-GPU communication and in-memory communication from CPU accesses, our DDMA-based design achieves significant performanceimprovement across a wide variety of system configurations (e.g., 20% average performance improvement on a typical 2-channel 2-rank memory system).",
"title": ""
},
{
"docid": "cacef3b17bafadd25cf9a49e826ee066",
"text": "Road accidents are frequent and many cause casualties. Fast handling can minimize the number of deaths from traffic accidents. In addition to victims of traffic accidents, there are also patients who need emergency handling of the disease he suffered. One of the first help that can be given to the victim or patient is to use an ambulance equipped with medical personnel and equipment needed. The availability of ambulance and accurate information about victims and road conditions can help the first aid process for victims or patients. Supportive treatment can be done to deal with patients by determining the best route (nearest and fastest) to the nearest hospital. The best route can be known by utilizing the collaboration between the Dijkstra algorithm and the Floyd-warshall algorithm. This application applies Dijkstra's algorithm to determine the fastest travel time to the nearest hospital. The Floyd-warshall algorithm is implemented to determine the closest distance to the hospital. Data on some nearby hospitals will be collected by the system using Dijkstra's algorithm and then the system will calculate the fastest distance based on the last traffic condition using the Floyd-warshall algorithm to determine the best route to the nearest hospital recommended by the system. This application is built with the aim of providing support for the first handling process to the victim or the emergency patient by giving the ambulance calling report and determining the best route to the nearest hospital.",
"title": ""
},
{
"docid": "39179bd76aef590fe02606e6be29029d",
"text": "The environment continues to be a source of ill-health for many people, particularly in developing countries. International environmental law offers a viable strategy for enhancing public health through the promotion of increased awareness of the linkages between health and environment, mobilization of technical and financial resources, strengthening of research and monitoring, enforcement of health-related standards, and promotion of global cooperation. An enhanced capacity to utilize international environmental law could lead to significant worldwide gains in public health.",
"title": ""
},
{
"docid": "40083241b498dc6ac14de7dcc0b38399",
"text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs.In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm.We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.",
"title": ""
}
] |
scidocsrr
|
962c0f559111a095b53257bae69c438b
|
WSMO-Lite and hRESTS: Lightweight semantic annotations for Web services and RESTful APIs
|
[
{
"docid": "3a1cc60b1b6729e06f178ab62d19c59c",
"text": "The Web 2.0 wave brings, among other aspects, the Programmable Web:increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS:SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.",
"title": ""
}
] |
[
{
"docid": "a16ced3651034a33a926fe20b9093af8",
"text": "Most existing automated debugging techniques focus on reducing the amount of code to be inspected and tend to ignore an important component of software failures: the inputs that cause the failure to manifest. In this paper, we present a new technique based on dynamic tainting for automatically identifying subsets of a program's inputs that are relevant to a failure. The technique (1) marks program inputs when they enter the application, (2) tracks them as they propagate during execution, and (3) identifies, for an observed failure, the subset of inputs that are potentially relevant for debugging that failure. To investigate feasibility and usefulness of our technique, we created a prototype tool, PENUMBRA, and used it to evaluate our technique on several failures in real programs. Our results are promising, as they show that PENUMBRA can point developers to inputs that are actually relevant for investigating a failure and can be more practical than existing alternative approaches.",
"title": ""
},
{
"docid": "e882efea987b4f248c0374c1555c668a",
"text": "This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.",
"title": ""
},
{
"docid": "287873a6428cfbf8fc9066c24d977d50",
"text": "Deployment of embedded technologies is increasingly being examined in industrial supply chains as a means for improving efficiency through greater control over purchase orders, inventory and product related information. Central to this development has been the advent of technologies such as bar codes, Radio Frequency Identification (RFID) systems, and wireless sensors which when attached to a product, form part of the product’s embedded systems infrastructure. The increasing integration of these technologies dramatically contributes to the evolving notion of a “smart product”, a product which is capable of incorporating itself into both physical and information environments. The future of this revolution in objects equipped with smart embedded technologies is one in which objects can not only identify themselves, but can also sense and store their condition, communicate This work was partly funded as part of the BRIDGE project by the European Commission within the Sixth Framework Programme (2002-2006) IP Nr. IST-FP6-033546. T. Sánchez López (B) · B. Patkai · D. McFarlane Engineering Department, Institute for Manufacturing, University of Cambridge, 16 Mill Lane, Cambridge CB2 1RX, UK e-mail: tsl26@cam.ac.uk B. Patkai e-mail: bp282@cam.ac.uk D. McFarlane e-mail: dcm@cam.ac.uk D. C. Ranasinghe The School of Computer Science, The University of Adelaide, Adelaide, South Australia, 5005, Australia e-mail: damith@cs.adelaide.edu.au with other objects and distributed infrastructures, and take decisions related to managing their life cycle. The object can essentially “plug” itself into a compatible systems infrastructure owned by different partners in a supply chain. However, as in any development process that will involve more than one end user, the establishment of a common foundation and understanding is essential for interoperability, efficient communication among involved parties and for developing novel applications. In this paper, we contribute to creating that common ground by providing a characterization to aid the specification and construction of “smart objects” and their underlying technologies. Furthermore, our work provides an extensive set of examples and potential applications of different categories of smart objects.",
"title": ""
},
{
"docid": "2fba3b2ae27e1389557794673137480d",
"text": "The paper provides an OWL ontology for legal cases with an instantiation of the legal case Popov v. Hayashi. The ontology makes explicit the conceptual knowledge of the legal case domain, supports reasoning about the domain, and can be used to annotate the text of cases, which in turn can be used to populate the ontology. A populated ontology is a case base which can be used for information retrieval, information extraction, and case based reasoning. The ontology contains not only elements for indexing the case (e.g. the parties, jurisdiction, and date), but as well elements used to reason to a decision such as argument schemes and the components input to the schemes. We use the Protégé ontology editor and knowledge acquisition system, current guidelines for ontology development, and tools for visual and linguistic presentation of the ontology.",
"title": ""
},
{
"docid": "d4783da5ba8daa92d93ce34ee9980c85",
"text": "Separation of text and non-text is an essential processing step for any document analysis system. Therefore, it is important to have a clear understanding of the state-of-the-art of text/non-text separation in order to facilitate the development of efficient document processing systems. This paper first summarizes the technical challenges of performing text/non-text separation. It then categorizes offline document images into different classes according to the nature of the challenges one faces, in an attempt to provide insight into various techniques presented in the literature. The pros and cons of various techniques are explained wherever possible. Along with the evaluation protocols, benchmark databases, this paper also presents a performance comparison of different methods. Finally, this article highlights the future research challenges and directions in this domain.",
"title": ""
},
{
"docid": "feca14524ff389c59a4d6f79954f26e3",
"text": "Zero shot learning (ZSL) is about being able to recognize gesture classes that were never seen before. This type of recognition involves the understanding that the presented gesture is a new form of expression from those observed so far, and yet carries embedded information universal to all the other gestures (also referred as context). As part of the same problem, it is required to determine what action/command this new gesture conveys, in order to react to the command autonomously. Research in this area may shed light to areas where ZSL occurs, such as spontaneous gestures. People perform gestures that may be new to the observer. This occurs when the gesturer is learning, solving a problem or acquiring a new language. The ability of having a machine recognizing spontaneous gesturing, in the same manner as humans do, would enable more fluent human-machine interaction. In this paper, we describe a new paradigm for ZSL based on adaptive learning, where it is possible to determine the amount of transfer learning carried out by the algorithm and how much knowledge is acquired from a new gesture observation. Another contribution is a procedure to determine what are the best semantic descriptors for a given command and how to use those as part of the ZSL approach proposed.",
"title": ""
},
{
"docid": "19bd7a6c21dd50c5dc8d14d5cfd363ab",
"text": "Frontotemporal dementia (FTD) is one of the most common forms of dementia in persons younger than 65 years. Variants include behavioral variant FTD, semantic dementia, and progressive nonfluent aphasia. Behavioral and language manifestations are core features of FTD, and patients have relatively preserved memory, which differs from Alzheimer disease. Common behavioral features include loss of insight, social inappropriateness, and emotional blunting. Common language features are loss of comprehension and object knowledge (semantic dementia), and nonfluent and hesitant speech (progressive nonfluent aphasia). Neuroimaging (magnetic resonance imaging) usually demonstrates focal atrophy in addition to excluding other etiologies. A careful history and physical examination, and judicious use of magnetic resonance imaging, can help distinguish FTD from other common forms of dementia, including Alzheimer disease, dementia with Lewy bodies, and vascular dementia. Although no cure for FTD exists, symptom management with selective serotonin reuptake inhibitors, antipsychotics, and galantamine has been shown to be beneficial. Primary care physicians have a critical role in identifying patients with FTD and assembling an interdisciplinary team to care for patients with FTD, their families, and caregivers.",
"title": ""
},
{
"docid": "f56d5487c5f59d9b951841b993cbec07",
"text": "We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.",
"title": ""
},
{
"docid": "a5f3b862a02fb26fa7b96ad0a10e762a",
"text": "Thesis for the degree of Doctor of Science (Technology) to be presented with due permission for the public examination and criticism in the Auditorium 1382 at High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the field of microprocessors and power electronics have enabled faster and faster movements with an electric motor. In such a dynamically demanding application, the dimensioning of the motor differs substantially from the industrial motor design, where feasible characteristics of the motor are for example high efficiency, a high power factor, and a low price. In motion control instead, such characteristics as high overloading capability, high-speed operation, high torque density and low inertia are required. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined; an induction motor and a permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high performance motor are studied, and some methods for improvements are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results in the laboratory conditions. Also a dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor electromagnetic characteristics due to the temperature rise.",
"title": ""
},
{
"docid": "c94abfc9bac978544366f43788843bbe",
"text": "In this paper, we propose a new feature extraction approach for face recognition based on Curvelet transform and local binary pattern operator. The motivation of this approach is based on two observations. One is that Curvelet transform is a new anisotropic multi-resolution analysis tool, which can effectively represent image edge discontinuities; the other is that local binary pattern operator is one of the best current texture descriptors for face images. As the curvelet features in different frequency bands represent different information of the original image, we extract such features using different methods for different frequency bands. Technically, the lowest frequency band component is processed using the local binary urvelet transform ocal binary pattern ocal property preservation pattern method, and only the medium frequency band components are normalized. And then, we combine them to create a feature set, and use the local preservation projection to reduce its dimension. Finally, we classify the test samples using the nearest neighbor classifier in the reduced space. Extensive experiments on the Yale database, the extended Yale B database, the PIE pose 09 database, and the FRGC database illustrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "f8c654b24abbe7d0239db559513021aa",
"text": "We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems. .",
"title": ""
},
{
"docid": "e04f1d787b897fb941e147dcf253ce4f",
"text": "1. Developmental Perspective on the Evolution of Behavioral Strategies: Approach 1 2. Evidence of Ontogenetic Behavioral Linkages and Dependencies 3 2.1 Early Developmental Origins of Behavioral Variation 3 2.2 Trade-offs in Neural Processes and Personality 5 2.3 Maintenance of Variation in Behavioral Expression Along Trade-off Axes 8 3. Developmental Origins of Behavioral Variation 10 3.1 Design Principles of the Brain and Mechanisms Underlying Neural Tradeoffs 10 3.2 Developmental Channeling: Mechanism for Separating Individuals Along Trade-off Axes 12 4. Applying the Concept of Developmental Channeling: Dispersal Strategies as an Example 17 4.1 Evolution of Dispersal Strategies 17 4.2 Maternally Induced Dispersal Behavior 21 5. Conclusion and Future Directions 25 Acknowledgments 27 References 27",
"title": ""
},
{
"docid": "228d7fa684e1caf43769fa13818b938f",
"text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.",
"title": ""
},
{
"docid": "649797f21efa24c523361afee80419c5",
"text": "Web search engines typically provide search results without considering user interests or context. We propose a personalized search approach that can easily extend a conventional search engine on the client side. Our mapping framework automatically maps a set of known user interests onto a group of categories in the Open Directory Project (ODP) and takes advantage of manually edited data available in ODP for training text classifiers that correspond to, and therefore categorize and personalize search results according to user interests. In two sets of controlled experiments, we compare our personalized categorization system (PCAT) with a list interface system (LIST) that mimics a typical search engine and with a nonpersonalized categorization system (CAT). In both experiments, we analyze system performances on the basis of the type of task and query length. We find that PCAT is preferable to LIST for information gathering types of tasks and for searches with short queries, and PCAT outperforms CAT in both information gathering and finding types of tasks, and for searches associated with free-form queries. From the subjects' answers to a questionnaire, we find that PCAT is perceived as a system that can find relevant Web pages quicker and easier than LIST and CAT.",
"title": ""
},
{
"docid": "799a7754fbcd9c5d42a0165448c89471",
"text": "• How accurate are people in judging\" traits of other users? • Are there systematic biases humans\" are subject to? • What are the implications of using\" human perception as a proxy for truth? • Which textual cues lead to a false \" perception of the truth? • Which textual cues make people\" more or less confident in their ratings? • Gender 2,607 authors, age – 826 authors • we use 100 tweets per author, 9 Mturk votes per author • URLs and mentions anonymized, English only filtered, duplicates eliminated, same 6 month time interval GENDER PERCEPTION",
"title": ""
},
{
"docid": "b3a10257beae1f64d3b00ad3837c955f",
"text": "We propose a novel shape representation useful for analyzing and processing shape collections, as well for a variety of learning and inference tasks. Unlike most approaches that capture variability in a collection by using a template model or a base shape, we show that it is possible to construct a full shape representation by using the latent space induced by a functional map network, allowing us to represent shapes in the context of a collection without the bias induced by selecting a template shape. Key to our construction is a novel analysis of latent functional spaces, which shows that after proper regularization they can be endowed with a natural geometric structure, giving rise to a well-defined, stable and fully informative shape representation. We demonstrate the utility of our representation in shape analysis tasks, such as highlighting the most distorted shape parts in a collection or separating variability modes between shape classes. We further exploit our representation in learning applications by showing how it can naturally be used within deep learning and convolutional neural networks for shape classification or reconstruction, significantly outperforming existing point-based techniques.",
"title": ""
},
{
"docid": "fda10c187c97f5c167afaa0f84085953",
"text": "We provide empirical evidence that suggests social media and stock markets have a nonlinear causal relationship. We take advantage of an extensive data set composed of social media messages related to DJIA index components. By using information-theoretic measures to cope for possible nonlinear causal coupling between social media and stock markets systems, we point out stunning differences in the results with respect to linear coupling. Two main conclusions are drawn: First, social media significant causality on stocks’ returns are purely nonlinear in most cases; Second, social media dominates the directional coupling with stock market, an effect not observable within linear modeling. Results also serve as empirical guidance on model adequacy in the investigation of sociotechnical and financial systems.",
"title": ""
},
{
"docid": "e34b8fd3e1fba5306a88e4aac38c0632",
"text": "1 Jomo was an Assistant Secretary General in the United Nations system responsible for economic research during 2005-2015.; Chowdhury (Chief, Multi-Stakeholder Engagement & Outreach, Financing for Development Office, UN-DESA); Sharma (Senior Economic Affairs Officer, Financing for Development Office, UN-DESA); Platz (Economic Affairs Officer, Financing for Development Office, UN-DESA); corresponding author: Anis Chowdhury (chowdhury4@un.org; anis.z.chowdhury@gmail.com). Thanks to colleagues at the Financing for Development Office of UN-DESA and an anonymous referee for their helpful comments. Thanks also to Alexander Kucharski for his excellent support in gathering data and producing figure charts and to Jie Wei for drawing the flow charts. However, the usual caveats apply. ABSTRACT",
"title": ""
},
{
"docid": "a75919f4a4abcc0796ae6ba269cb91c1",
"text": "Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system’s constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.",
"title": ""
}
] |
scidocsrr
|
0b2407af3de5ebd08004c37614e1f080
|
A robust eye gaze estimation using geometric eye features
|
[
{
"docid": "026191acb86a5c59889e0cf0491a4f7d",
"text": "We present a new dataset, ideal for Head Pose and Eye Gaze Estimation algorithm testings. Our dataset was recorded using a monocular system, and no information regarding camera or environment parameters is offered, making the dataset ideal to be tested with algorithms that do not utilize such information and do not require any specific equipment in terms of hardware.",
"title": ""
}
] |
[
{
"docid": "a1bff389a9a95926a052ded84c625a9e",
"text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "8919fb37c9cb09e01a949849b326a02b",
"text": "Soil and nutrient depletion from intensive use of land is a critical issue for food production. An understanding of whether the soil is adequately treated with appropriate crop management practices in real-time during production cycles could prevent soil erosion and the overuse of natural or artificial resources to keep the soil healthy and suitable for planting. Precision agriculture traditionally uses expensive techniques to monitor the health of soil and crops including images from satellites and airplanes. Recently there are several studies using drones and a multitude of sensors connected to farm machinery to observe and measure the health of soil and crops during planting and harvesting. This paper describes a real-time, in-situ agricultural internet of things (IoT) device designed to monitor the state of the soil and the environment. This device was designed to be compatible with open hardware and it is composed of temperature and humidity sensors (soil and environment), electrical conductivity of the soil and luminosity, Global Positioning System (GPS) and a ZigBee radio for data communication. The field trial involved soil testing and measurements of the local climate in Sao Paulo, Brazil. The measurements of soil temperature, humidity and conductivity are used to monitor soil conditions. The local climate data could be used to support decisions about irrigation and other activities related to crop health. On-going research includes methods to reduce the consumption of energy and increase the number of sensors. Future applications include the use of the IoT device to detect fire in crops, a common problem in sugar cane crops and the integration of the IoT device with irrigation management systems to improve water usage.",
"title": ""
},
{
"docid": "8a18e35f037920295c5d45bcf97db0ad",
"text": "We introduce a powerful recurrent neural network based method for novelty detection to the application of detecting radio anomalies. This approach holds promise in significantly increasing the ability of naive anomaly detection to detect small anomalies in highly complex complexity multi-user radio bands. We demonstrate the efficacy of this approach on a number of common real over the air radio communications bands of interest and quantify detection performance in terms of probability of detection an false alarm rates across a range of interference to band power ratios and compare to baseline methods.",
"title": ""
},
{
"docid": "b56467b5761a1294bb2b1739d6504ef2",
"text": "This paper presents the creation of a robot capable of drawing artistic portraits. The application is purely entertaining and based on existing tools for face detection and image reconstruction, as well as classical tools for trajectory planning of a 4 DOFs robot arm. The innovation of the application lies in the care we took to make the whole process as human-like as possible. The robot's motions and its drawings follow a style characteristic to humans. The portraits conserve the esthetics features of the original images. The whole process is interactive, using speech recognition and speech synthesis to conduct the scenario",
"title": ""
},
{
"docid": "d8b8aeb2cb7f2dd29af1c0363b31dfef",
"text": "As cloud computing becomes prevalent, more and more sensitive data is being centralized into the cloud for sharing, which brings forth new challenges for outsourced data security and privacy. Attributebased encryption (ABE) is a promising cryptographic primitive, which has been widely applied to design fine-grained access control system recently. However, ABE is being criticized for its high scheme overhead as the computational cost grows with the complexity of the access formula. This disadvantage becomes more serious for mobile devices because they have constrained computing resources. Aiming at tackling the challenge above, we present a generic and efficient solution to implement attribute-based access control system by introducing secure outsourcing techniques into ABE. More precisely, two cloud service providers (CSPs), namely key generation-cloud service provider (KG-CSP) and decryption-cloud service provider (D-CSP) are introduced to perform the outsourced key-issuing and decryption on behalf of attribute authority and users respectively. In order to outsource heavy computation to both CSPs without private information leakage, we formulize an underlying primitive called outsourced ABE (OABE) and propose several constructions with outsourced decryption and keyissuing. Finally, extensive experiment demonstrates that with the help of KG-CSP and D-CSP, efficient key-issuing and decryption are achieved in our constructions.",
"title": ""
},
{
"docid": "7a9e1026f89d4fee43d87faad387a198",
"text": "Today, Deep learning algorithms have quickly become essential in the field of medical image analysis. Compared to the traditional methods, these Deep learning techniques are more efficient in extracting compact information leading towards significant improvement performance of medical image analysis system. We present in this paper a new technique for sphenoid sinus automatic segmentation using a 3D Convolutional Neural Networks (CNN). Due to the scarcity of medical data, we chose to used a 3D CNN model learned on a small training set. Mathematical morphology operations are then used to automatically detect and segment the region of interest. Our proposed method is tested and compared with a semi-automatic method and manual delineations made by a specialist. The preliminary results from the Computed Tomography (CT) volumes seem to be very promising.",
"title": ""
},
{
"docid": "787979d6c1786f110ff7a47f09b82907",
"text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.",
"title": ""
},
{
"docid": "ed3b4ace00c68e9ad2abe6d4dbdadfcb",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "b02992d4ffe592d3afb7efcbdc64a195",
"text": "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Targetside monolingual data plays an important role in boosting fluency for phrasebased statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic backtranslation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English↔German (+2.8–3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish→English (+2.1–3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English→German.",
"title": ""
},
{
"docid": "802f77b4e2b8c8cdfb68f80fe31d7494",
"text": "In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the 3-month long total trades of two different transaction modes (representative assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision tree based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for representative assisted mode and 0.06, 0.1, and 0.18% for online trading system, respectively. q 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
{
"docid": "0afb2a40553e1bef9d8250a3c5012180",
"text": "Attacks to networks are becoming more complex and sophisticated every day. Beyond the so-called script-kiddies and hacking newbies, there is a myriad of professional attackers seeking to make serious profits infiltrating in corporate networks. Either hostile governments, big corporations or mafias are constantly increasing their resources and skills in cybercrime in order to spy, steal or cause damage more effectively. With the ability and resources of hackers growing, the traditional approaches to Network Security seem to start hitting their limits and it’s being recognized the need for a smarter approach to threat detections. This paper provides an introduction on the need for evolution of Cyber Security techniques and how Artificial Intelligence (AI) could be of application to help solving some of the problems. It provides also, a high-level overview of some state of the art AI Network Security techniques, to finish analysing what is the foreseeable future of the application of AI to Network Security. Applications of Artificial Intelligence (AI) to Network Security 3",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "6ab1bc5fced659803724f2f7916be355",
"text": "Statistical Analysis of a Telephone Call Center Lawrence Brown, Noah Gans, Avishai Mandelbaum, Anat Sakov, Haipeng Shen, Sergey Zeltyn and Linda Zhao Lawrence Brown is Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Noah Gans is Associate Professor, Department of Operations and Information Management, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Avishai Mandelbaum is Professor, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Anat Sakov is Postdoctoral Fellow, Tel-Aviv University, Tel-Aviv, Israel . Haipeng Shen is Assistant Professor, Department of Statistics, University of North Carolina, Durham, NC 27599 . Sergey Zeltyn is Ph.D. Candidate, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Linda Zhao is Associate Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . This work was supported by National Science Foundation DMS-99-71751 and DMS-99-71848, the Sloane Foundation, Israeli Science Foundation grants 388/99 and 126/02, the Wharton Financial Institutions Center, and Technion funds for the promotion of research and sponsored research. Version of record first published: 31 Dec 2011.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "4770ca4d8c12f949b8010e286d147802",
"text": "The clinical, radiologic, and pathologic findings in radiation injury of the brain are reviewed. Late radiation injury is the major, dose-limiting complication of brain irradiation and occurs in two forms, focal and diffuse, which differ significantly in clinical and radiologic features. Focal and diffuse injuries both include a wide spectrum of abnormalities, from subclinical changes detectable only by MR imaging to overt brain necrosis. Asymptomatic focal edema is commonly seen on CT and MR following focal or large-volume irradiation. Focal necrosis has the CT and MR characteristics of a mass lesion, with clinical evidence of focal neurologic abnormality and raised intracranial pressure. Microscopically, the lesion shows characteristic vascular changes and white matter pathology ranging from demyelination to coagulative necrosis. Diffuse radiation injury is characterized by periventricular decrease in attenuation of CT and increased signal on proton-density and T2-weighted MR images. Most patients are asymptomatic. When clinical manifestations occur, impairment of mental function is the most prominent feature. Pathologic findings in focal and diffuse radiation necrosis are similar. Necrotizing leukoencephalopathy is the form of diffuse white matter injury that follows chemotherapy, with or without irradiation. Vascular disease is less prominent and the latent period is shorter than in diffuse radiation injury; radiologic findings and clinical manifestations are similar. Late radiation injury of large arteries is an occasional cause of postradiation cerebral injury, and cerebral atrophy and mineralizing microangiopathy are common radiologic findings of uncertain clinical significance. Functional imaging by positron emission tomography can differentiate recurrent tumor from focal radiation necrosis with positive and negative predictive values for tumor of 80-90%. Positron emission tomography of the blood-brain barrier, glucose metabolism, and blood flow, together with MR imaging, have demonstrated some of the pathophsiology of late radiation necrosis. Focal glucose hypometabolism on positron emissin tomography in irradiated patients may have prognostic significance for subsequent development of clinically evident radiation necrosis.",
"title": ""
},
{
"docid": "08675a0dc7a2f370d33704470297cec3",
"text": "Construal level theory (CLT) is an account of how psychological distance influences individuals' thoughts and behavior. CLT assumes that people mentally construe objects that are psychologically near in terms of low-level, detailed, and contextualized features, whereas at a distance they construe the same objects or events in terms of high-level, abstract, and stable characteristics. Research has shown that different dimensions of psychological distance (time, space, social distance, and hypotheticality) affect mental construal and that these construals, in turn, guide prediction, evaluation, and behavior. The present paper reviews this research and its implications for consumer psychology.",
"title": ""
},
{
"docid": "1a5f56c7c7a9d44a762ba94297f3ca7a",
"text": "BACKGROUND\nFloods are the most common type of global natural disaster. Floods have a negative impact on mental health. Comprehensive evaluation and review of the literature are lacking.\n\n\nOBJECTIVE\nTo systematically map and review available scientific evidence on mental health impacts of floods caused by extended periods of heavy rain in river catchments.\n\n\nMETHODS\nWe performed a systematic mapping review of published scientific literature in five languages for mixed studies on floods and mental health. PUBMED and Web of Science were searched to identify all relevant articles from 1994 to May 2014 (no restrictions).\n\n\nRESULTS\nThe electronic search strategy identified 1331 potentially relevant papers. Finally, 83 papers met the inclusion criteria. Four broad areas are identified: i) the main mental health disorders-post-traumatic stress disorder, depression and anxiety; ii] the factors associated with mental health among those affected by floods; iii) the narratives associated with flooding, which focuses on the long-term impacts of flooding on mental health as a consequence of the secondary stressors; and iv) the management actions identified. The quantitative and qualitative studies have consistent findings. However, very few studies have used mixed methods to quantify the size of the mental health burden as well as exploration of in-depth narratives. Methodological limitations include control of potential confounders and short-term follow up.\n\n\nLIMITATIONS\nFloods following extreme events were excluded from our review.\n\n\nCONCLUSIONS\nAlthough the level of exposure to floods has been systematically associated with mental health problems, the paucity of longitudinal studies and lack of confounding controls precludes strong conclusions.\n\n\nIMPLICATIONS\nWe recommend that future research in this area include mixed-method studies that are purposefully designed, using more rigorous methods. Studies should also focus on vulnerable groups and include analyses of policy and practical responses.",
"title": ""
}
] |
scidocsrr
|
fa946ba4ce17c66d7a2aaa9e0f44b9cd
|
Online Speed Adaptation Using Supervised Learning for High-Speed, Off-Road Autonomous Driving
|
[
{
"docid": "047480185afbea439eee2ee803b9d1f9",
"text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.",
"title": ""
},
{
"docid": "1a2b8e09251e6b041d40da157051e61c",
"text": "Abstract. Unmanned ground vehicles have important applications in high speed, rough terrain scenarios. In these scenarios unexpected and dangerous situations can occur that require rapid hazard avoidance maneuvers. At high speeds, there is limited time to perform navigation and hazard avoidance calculations based on detailed vehicle and terrain models. This paper presents a method for high speed hazard avoidance based on the “trajectory space,” which is a compact model-based representation of a robot’s dynamic performance limits in rough, natural terrain. Simulation and experimental results on a small gasoline-powered unmanned ground vehicle demonstrate the method’s effectiveness on sloped and rough terrain.",
"title": ""
}
] |
[
{
"docid": "8fcc9f13f34b03d68f59409b2e3b007a",
"text": "Despite defensive advances, malicious software (malware) remains an ever present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.",
"title": ""
},
{
"docid": "bc6be8b5fd426e7f8d88645a2b21ff6a",
"text": "irtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only onefifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about Critical Thinking",
"title": ""
},
{
"docid": "2774af56977d08ba42ae54d3c583f8a3",
"text": "Most recently, Yang et al proposed a new set of security requirements for two-factor smart-card-based password mutual authentication and then suggested a new scheme satisfying all their security requirements. In this paper, however, we first show one critical security weakness being overlooked, i.e., allowing key-compromise impersonation. We provide an attack to illustrate the adversary is able to masquerade any user to access the server's service in their protocol once if the long-term key of the server is compromised. Thereafter, we suggests key-compromise impersonation resilience should be added as one more important security requirement for two-factor smart-card based password mutual authentication and then propose an improved protocol to eliminate the security weakness existing in Yang et al's protocol.",
"title": ""
},
{
"docid": "ca203c2286b0e250b8a2e5ead0bdcaed",
"text": "It is widely recognized that data visualization may be a powerful methodology for exploratory analysis. In order to fulfill this claim, visualization software must be carefully designed taking into account two principal aspects: characteristics of the data to be visualized and the exploratory tasks to be supported. The tasks that may potentially arise in data exploration are, in their turn, dependent on the data. In the chapter, we present visualization software tools for three different types of spatio-temporal data developed using a task-driven approach to design. We demonstrate that different exploratory tasks may be anticipated in these three cases and that different techniques are required to properly support exploration of the data. Prior to the consideration of the examples, we briefly describe the typologies of data and tasks we use in our work. 10.1 Scope and Perspective This chapter offers a view on geovisualization from the perspective of computer scientists with an extensive experience in developing software tools for exploratory analysis of spatial data. Our tools are mostly based on combination of familiar techniques from various disciplines: Cartography, Statistical Graphics, Information Visualization, and Human–Computer Interaction. Traditional mapping and graphing techniques are enhanced with interactivity and manipulability. Typically, the ideas concerning useful technique combinations and enhancements come to us when we examine some specific datasets received from people interested in exploring these data. It is commonly recognized that techniques used for graphical representation of data must correspond to characteristics of the data (Bertin, 1983), and the same applies to software tools for visual data exploration. However, as we have learned from our Exploring Geovisualization J. Dykes, A.M. MacEachren, M.-J. Kraak (Editors) q 2005 Elsevier Ltd. All rights reserved. 201 preprint : November 2004 do not redistribute. J. Dykes, A.M. MacEachren, M-J. Kraak (2005), Exploring Geovisualization, Pergamon, 732pp. 0-08-044531-4 experience, the route from data characteristics to the development of appropriate tools consists of two parts: first, data characteristics determine the potential questions (tasks) that may emerge in the process of the data exploration; second, the tasks make requirements of the tools and thereby define the space of possible design options. In this chapter, we advocate the task – analytical approach to the selection of appropriate visualization techniques and design of tools for the exploratory analysis of geographically referenced data. For this purpose, we offer three examples of geovisualization tool design for different types of spatio-temporal data. Prior to the consideration of the examples, we introduce the typological framework we use for revealing the set of potential tasks from the characteristics of datasets to analyze. We hope this material will be useful both for designers of geovisualization tools and for analysts applying existing tools to their data.",
"title": ""
},
{
"docid": "34760f81c5486936e48c1a334acf61d8",
"text": "Sponsored search is a multi-billion dollar business that generates most of the revenue for search engines. Predicting the probability that users click on ads is crucial to sponsored search because the prediction is used to influence ranking, filtering, placement, and pricing of ads. Ad ranking, filtering and placement have a direct impact on the user experience, as users expect the most useful ads to rank high and be placed in a prominent position on the page. Pricing impacts the advertisers' return on their investment and revenue for the search engine. The objective of this paper is to present a framework for the personalization of click models in sponsored search. We develop user-specific and demographic-based features that reflect the click behavior of individuals and groups. The features are based on observations of search and click behaviors of a large number of users of a commercial search engine. We add these features to a baseline non-personalized click model and perform experiments on offline test sets derived from user logs as well as on live traffic. Our results demonstrate that the personalized models significantly improve the accuracy of click prediction.",
"title": ""
},
{
"docid": "2c3b85bcef5ac7dd15e7411a1d10da22",
"text": "Revision history 2009-01-09 Corrected grammar in the paragraph which precedes Equation (17). Changed datestamp format in the revision history. 2008-07-05 Corrected caption for Figure (2). Added conditioning on θn for l in convergence discussion in Section (3.2). Changed email contact info to reduce spam. 2006-10-14 Added explanation and disambiguating parentheses in the development leading to Equation (14). Minor corrections. 2006-06-28 Added Figure (1). Corrected typo above Equation (5). Minor corrections. Added hyperlinks. 2005-08-26 Minor corrections. 2004-07-18 Initial revision.",
"title": ""
},
{
"docid": "d7ec6d060760e1c80459277f3f663743",
"text": "a r t i c l e i n f o Keywords: Supply chain management Analytical capabilities Information systems Business process management Performance SCOR The paper investigates the relationship between analytical capabilities in the plan, source, make and deliver area of the supply chain and its performance using information system support and business process orientation as moderators. Structural equation modeling employs a sample of 310 companies from different industries from the USA, Europe, Canada, Brazil and China. The findings suggest the existence of a statistically significant relationship between analytical capabilities and performance. The moderation effect of information systems support is considerably stronger than the effect of business process orientation. The results provide a better understanding of the areas where the impact of business analytics may be the strongest. In the modern world competition is no longer between organizations , but among supply chains ('SCs'). Effective supply chain management ('SCM') has therefore become a potentially valuable way of securing a competitive advantage and improving organizational performance [47,79]. However, the understanding of the why and how SCM affects firm performance, which areas are especially important and which are the important moderator effects is still incomplete. This paper thus analyses the impact of business analytics ('BA') in a SC on the improvement of SC performance. The topic is important since enhancing the effectiveness and efficiency of SC analytics is a critical component of a chain's ability to achieve its competitive advantage [68]. BA have been identified as an important \" tool \" for SCM [44] and optimization techniques have become an integral part of organizational business processes [80]. A correct relevant business decision based on bundles of very large volumes of both internal and external data is only possible with BA [68]. It is therefore not surprising that research interest in BA use has been increasing [43]. However, despite certain anecdotic evidence (see for instance the examples given in [19]) or optimistic reports of return-on-investment exceeding 100% (see e.g. [25]) a systematic and structured analysis of the impact of BA use on SC performance has not yet been conducted. Accordingly, the main contribution of our paper is its analysis of the impact of the use of BA in different areas of the SC (based on the Supply Chain Operations Reference ('SCOR') model) on the performance of the chain. Further, the mediating effects of two important constructs, namely information systems ('IS') support and business …",
"title": ""
},
{
"docid": "5debe55b333b1c3e4a9b3212865652a8",
"text": "Cryptorchidism and hypospadias have been related to prenatal estrogen exposure in animal models. Some chemicals used in farming and gardening have been shown to possess estrogenic and other hormone-disrupting effects. Earlier studies have indicated increased risks of urogenital malformations in the sons of pesticide appliers. In the present study, parental occupation in the farming and gardening industry among 6,177 cases of cryptorchidism, 1,345 cases of hypospadias, and 23,273 controls, born live from 1983 to 1992 in Denmark, was investigated in a register-based case-control study. A significantly increased risk of cryptorchidism but not hypospadias was found in sons of women working in gardening (adjusted odds ratio = 1.67; 95% confidence interval, 1.14-2.47). The risks were not increased in sons of men working in farming or gardening. The increased risk of cryptorchidism among sons of female gardeners could suggest an association with prenatal exposure to occupationally related chemicals.",
"title": ""
},
{
"docid": "b2e493de6e09766c4ddbac7de071e547",
"text": "In this paper we describe and evaluate some recently innovated coupling metrics for object oriented OO design The Coupling Between Objects CBO metric of Chidamber and Kemerer C K are evaluated empirically using ve OO systems and compared with an alternative OO design metric called NAS which measures the Number of Associations between a class and its peers The NAS metric is directly collectible from design documents such as the Object Model of OMT Results from all systems studied indicate a strong relationship between CBO and NAS suggesting that they are not orthogonal We hypothesised that coupling would be related to understandability the number of errors and error density No relationships were found for any of the systems between class understandability and coupling However we did nd partial support for our hypothesis linking increased coupling to increased error density The work described in this paper is part of the Metrics for OO Programming Systems MOOPS project which aims are to evaluate existing OO metrics and to innovate and evaluate new OO analysis and design metrics aimed speci cally at the early stages of development",
"title": ""
},
{
"docid": "c9972414881db682c219d69d59efa34a",
"text": "“Employee turnover” as a term is widely used in business circles. Although several studies have been conducted on this topic, most of the researchers focus on the causes of employee turnover. This research looked at extent of influence of various factors on employee turnover in urban and semi urban banks. The research was aimed at achieving the following objectives: identify the key factors of employee turnover; determine the extent to which the identified factors are influencing employees’ turnover. The study is based on the responses of the employees of leading banks. A self-developed questionnaire, measured on a Likert Scale was used to collect data from respondents. Quantitative research design was used and this design was chosen because its findings are generaliseable and data objective. The reliability of the data collected is done by split half method.. The collected data were being analyzed using a program called Statistical Package for Social Science (SPSS ver.16.0 For Windows). The data analysis is carried out by calculating mean, standard deviation and linear correlation. The difference between means of variable was estimated by using t-test. The following factors have significantly influenced employee turnover in banking sector: Work Environment, Job Stress, Compensation (Salary), Employee relationship with management, Career Growth.",
"title": ""
},
{
"docid": "a1d9c897f926fa4cc45ebc6209deb6bc",
"text": "This paper addresses the relationship between the ego, id, and internal objects. While ego psychology views the ego as autonomous of the drives, a less well-known alternative position views the ego as constituted by the drives. Based on Freud's ego-instinct account, this position has developed into a school of thought which postulates that the drives act as knowers. Given that there are multiple drives, this position proposes that personality is constituted by multiple knowers. Following on from Freud, the ego is viewed as a composite sub-set of the instinctual drives (ego-drives), whereas those drives cut off from expression form the id. The nature of the \"self\" is developed in terms of identification and the possibility of multiple personalities is also established. This account is then extended to object-relations and the explanatory value of the ego-drive account is discussed in terms of the addressing the nature of ego-structures and the dynamic nature of internal objects. Finally, the impact of psychological conflict and the significance of repression for understanding the nature of splits within the psyche are also discussed.",
"title": ""
},
{
"docid": "4a39ad1bac4327a70f077afa1d08c3f0",
"text": "Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many approaches to many IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. The aim of this full- day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR.",
"title": ""
},
{
"docid": "33447e2bf55a419dfec2520e9449ef0e",
"text": "We present a unified unsupervised statistical model for text normalization. The relationship between standard and non-standard tokens is characterized by a log-linear model, permitting arbitrary features. The weights of these features are trained in a maximumlikelihood framework, employing a novel sequential Monte Carlo training algorithm to overcome the large label space, which would be impractical for traditional dynamic programming solutions. This model is implemented in a normalization system called UNLOL, which achieves the best known results on two normalization datasets, outperforming more complex systems. We use the output of UNLOL to automatically normalize a large corpus of social media text, revealing a set of coherent orthographic styles that underlie online language variation.",
"title": ""
},
{
"docid": "600097dd56f98fadde8c2ac7be5e4876",
"text": "To study heart beat pulse wave propagation in real time and to evaluate the vascular blood flow resistance an important physiological parameter for vascular diagnostics. Photoplethysmography is a non-invasive technique that measures relative blood volume changes in the blood vessels close to the skin. PPG analysis emphasizes the importance of early evaluation of the diseases We present the results of analysis of photoplethysmography (PPG) signal having motion artifacts which are as alike Gaussian noise in nature. We have proposed a methodology in detecting Heart rate and respiration rate after performing noise cancellation i.e. removing the motion artifacts of the PPG signal. Significance of different Wavelets such as db4, bior3.3, coif1, sym2, haar discussed for removing the motion artifacts. A novel beat rate extraction algorithm (BREA) is implemented monitor the heart rate and respiratory rate of peripheral pulse which has a steep rise and notch on falling slope in the subjects and a more gradual rise and fall and very small dicrotic notch.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "ff9ac94a02a799e63583127ac300b0b4",
"text": "Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear dynamics and observation functions. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and transition functions from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "b4efebd49c8dd2756a4c2fb86b854798",
"text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.",
"title": ""
},
{
"docid": "f20a3c60d7415186b065dc7782af16ef",
"text": "The present research examined how implicit racial associations and explicit racial attitudes of Whites relate to behaviors and impressions in interracial interactions. Specifically, the authors examined how response latency and self-report measures predicted bias and perceptions of bias in verbal and nonverbal behavior exhibited by Whites while they interacted with a Black partner. As predicted, Whites' self-reported racial attitudes significantly predicted bias in their verbal behavior to Black relative to White confederates. Furthermore, these explicit attitudes predicted how much friendlier Whites felt that they behaved toward White than Black partners. In contrast, the response latency measure significantly predicted Whites' nonverbal friendliness and the extent to which the confederates and observers perceived bias in the participants' friendliness.",
"title": ""
},
{
"docid": "6aed3ffa374139fa9c4e0b7c1afb7841",
"text": "Recent longitudinal and cross-sectional aging research has shown that personality traits continue to change in adulthood. In this article, we review the evidence for mean-level change in personality traits, as well as for individual differences in change across the life span. In terms of mean-level change, people show increased selfconfidence, warmth, self-control, and emotional stability with age. These changes predominate in young adulthood (age 20-40). Moreover, mean-level change in personality traits occurs in middle and old age, showing that personality traits can change at any age. In terms of individual differences in personality change, people demonstrate unique patterns of development at all stages of the life course, and these patterns appear to be the result of specific life experiences that pertain to a person's stage of life.",
"title": ""
},
{
"docid": "3b1d73691176ada154bab7716c6e776c",
"text": "Purpose – The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms belonging to the high-tech industry. The eight factors examined in this study are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach – A questionnaire-based survey was used to collect data from 111 firms belonging to the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings – The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure characteristics have a significant effect on the adoption of cloud computing. Research limitations/implications – The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications – The findings offer cloud computing service providers with a better understanding of what affects cloud computing adoption characteristics, with relevant insight on current promotions. Originality/value – The research contributes to the application of new technology cloud computing adoption in the high-tech industry through the use of a wide range of variables. The findings also help firms consider their information technologies investments when implementing cloud computing.",
"title": ""
}
] |
scidocsrr
|
22485c51a3472aedf98f7ad01376a746
|
Touch-less palm print biometrics: Novel design and implementation
|
[
{
"docid": "31d30d78a436acb347cea424c1e4fd63",
"text": "Biometrics-based authentication is a veri,cation approach using the biological features inherent in each individual. They are processed based on the identical, portable, and arduous duplicate characteristics. In this paper, we propose a scanner-based personal authentication system by using the palm-print features. It is very suitable in many network-based applications. The authentication system consists of enrollment and veri,cation stages. In the enrollment stage, the training samples are collected and processed by the pre-processing, feature extraction, and modeling modules to generate the matching templates. In the veri,cation stage, a query sample is also processed by the pre-processing and feature extraction modules, and then is matched with the reference templates to decide whether it is a genuine sample or not. The region of interest (ROI) for each sample is ,rst obtained from the pre-processing module. Then, the palm-print features are extracted from the ROI by using Sobel and morphological operations. The reference templates for a speci,c user are generated in the modeling module. Last, we use the template-matching and the backpropagation neural network to measure the similarity in the veri,cation stage. Experimental results verify the validity of our proposed approaches in personal authentication.? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "c3b1ad57bab87d796562a771d469b18d",
"text": "The focus of this paper is on one diode photovoltaic cell model. The theory as well as the construction and working of photovoltaic cells using single diode method is also presented. Simulation studies are carried out with different temperatures. Based on this study a conclusion is drawn with comparison with ideal diode. General TermssIn recent years, significant photovoltaic (PV) deployment has occurred, particularly in Germany, Spain and Japan [1]. Also, PV energy is going to become an important source in coming years in Portugal, as it has highest source of sunshine radiation in Europe. Presently the tenth largest PV power plant in the world is in Moura, Portugal, which has an installed capacity of 46 MW and aims to reach 1500 MW of installed capacity by 2020, as stated by the Portuguese National Strategy ENE 2020, multiplying tenfold the existing capacity [2]. The solar cells are basically made of semiconductors which are manufactured using different process. These semiconductors [4]. The intrinsic properties and the incoming solar radiation are responsible for the type of electric energy produced [5]. The solar radiation is composed of photons of different energies, and some are absorbed at the p-n junction. Photons with energies lower than the bandgap of the solar cell are useless and generate no voltage or electric current. Photons with energy superior to the band gap generate electricity, but only the energy corresponding to the band gap is used. The remainder of energy is dissipated as heat in the body of the solar cell [6]. KeywordssPV cell, solar cell, one diode model",
"title": ""
},
{
"docid": "55f11df001ffad95e07cd20b3b27406d",
"text": "CNNs have proven to be a very successful yet computationally expensive technique which made them slow to be adopted in mobile and embedded systems. There is a number of possible optimizations: minimizing the memory footprint, using lower precision and approximate computation, reducing computation cost of convolutions with FFTs. These have been explored recently and were shown to work. This project take ideas of using FFTs further and develops an alternative way to computing CNN – purely in frequency domain. As a side result it develops intuition about nonlinear elements: why do they work and how new types can be created.",
"title": ""
},
{
"docid": "91108c364f2a8eb82a1d7fffdcc32fc0",
"text": "Robustly extracting the features of lane markings under different lighting and weather conditions such as shadows, glows, sunset and night is the a key technology of the lane departure warning system (LDWS). In this paper, we propose a robust lane marking feature extraction method. By useing the characteristics of the lane marking to detect candidate areas. The final lane marking features are extracted by first finding the center points of the lane marking in the candidate area then these center point pixels are labeled according to the intensity similarity along the direction of the vanishing point. The performance of the proposed method is evaluated by experiment at results using real world lane data.",
"title": ""
},
{
"docid": "297a61a2c04c8553da9168d0f72a1d64",
"text": "CONTEXT\nSelf-myofascial release (SMR) is a technique used to treat myofascial restrictions and restore soft-tissue extensibility.\n\n\nPURPOSE\nTo determine whether the pressure and contact area on the lateral thigh differ between a Multilevel rigid roller (MRR) and a Bio-Foam roller (BFR) for participants performing SMR.\n\n\nPARTICIPANTS\nTen healthy young men and women.\n\n\nMETHODS\nParticipants performed an SMR technique on the lateral thigh using both myofascial rollers. Thin-film pressure sensels recorded pressure and contact area during each SMR trial.\n\n\nRESULTS\nMean sensel pressure exerted on the soft tissue of the lateral thigh by the MRR (51.8 +/- 10.7 kPa) was significantly (P < .001) greater than that of the conventional BFR (33.4 +/- 6.4 kPa). Mean contact area of the MRR (47.0 +/- 16.1 cm2) was significantly (P < .005) less than that of the BFR (68.4 +/- 25.3 cm2).\n\n\nCONCLUSION\nThe significantly higher pressure and isolated contact area with the MRR suggest a potential benefit in SMR.",
"title": ""
},
{
"docid": "0208d66e905292e1c83cf4af43f2b8aa",
"text": "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penaltybased DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems. & 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3b9df74123b17342b6903120c16242e3",
"text": "Surgical eyebrow lift has been described by using many different open and endoscopic methods. Difficult techniques and only short time benefits oft lead to patients' complaints. We present a safe and simple temporal Z-incision technique for eyebrow lift in 37 patients. Besides simplicity and safety, our technique shows long lasting aesthetic results with hidden scars and a high rate of patient satisfaction.",
"title": ""
},
{
"docid": "9649df6c5aab87091244f9271f46df5c",
"text": "With about 2.2 million Americans currently using wheeled mobility devices, wheelchairs are frequently provided to people with impaired mobility to provide accessibility to the community. Individuals with spinal cord injuries, arthritis, balance disorders, and other conditions or diseases are typical users of wheelchairs. However, secondary injuries and wheelchair-related accidents are risks introduced by wheelchairs. Research is underway to advance wheelchair design to prevent or accommodate secondary injuries related to propulsion and transfer biomechanics, while improving safe, functional performance and accessibility to the community. This paper summarizes research and development underway aimed at enhancing safety and optimizing wheelchair design",
"title": ""
},
{
"docid": "2b4f3b7791a4f98d4ce4a7f7b6164573",
"text": "Development of reliable and eco-friendly process for the synthesis of metallic nanoparticles is an important step in the field of application of nanotechnology. We have developed modern method by using agriculture waste to synthesize silver nanoparticles by employing an aqueous peel extract of Annona squamosa in AgNO(3). Controlled growth of silver nanoparticles was formed in 4h at room temperature (25°C) and 60°C. AgNPs were irregular spherical in shape and the average particle size was about 35±5 nm and it is consistent with particle size obtained by XRD Scherer equation.",
"title": ""
},
{
"docid": "8fb7249b1caefa84ffa13eff7e026e8e",
"text": "Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: An academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.",
"title": ""
},
{
"docid": "421516992f06a42aba5e6d312ab342bf",
"text": "We present a fully unsupervised method for automated construction of WordNets based upon recent advances in distributional representations of sentences and word-senses combined with readily available machine translation tools. The approach requires very few linguistic resources and is thus extensible to multiple target languages. To evaluate our method we construct two 600-word test sets for word-to-synset matching in French and Russian using native speakers and evaluate the performance of our method along with several other recent approaches. Our method exceeds the best language-specific and multi-lingual automated WordNets in F-score for both languages. The databases we construct for French and Russian, both languages without large publicly available manually constructed WordNets, will be publicly released along with the test sets.",
"title": ""
},
{
"docid": "d2e0309b503a23a9c0dd4360d0d26294",
"text": "The rapid emergence of user-generated content (UGC) inspires knowledge sharing among Internet users. A good example is the well-known travel site TripAdvisor.com, which enables users to share their experiences and express their opinions on attractions, accommodations, restaurants, etc. The UGC about travel provide precious information to the users as well as staff in travel industry. In particular, how to identify reviews that are noteworthy for hotel management is critical to the success of hotels in the competitive travel industry. We have employed two hotel managers to conduct an examination on Taiwan’s hotel reviews in Tripadvisor.com and found that noteworthy reviews can be characterized by their content features, sentiments, and review qualities. Through the experiments using tripadvisor.com data, we find that all three types of features are important in identifying noteworthy hotel reviews. Specifically, content features are shown to have the most impact, followed by sentiments and review qualities. With respect to the various methods for representing content features, LDA method achieves comparable performance to TF-IDF method with higher recall and much fewer features.",
"title": ""
},
{
"docid": "2e5800ac4d65ac6556dd5c1be22fd6bf",
"text": "The issues of cyberbullying and online harassment have gained considerable coverage in the last number of years. Social media providers need to be able to detect abusive content both accurately and efficiently in order to protect their users. Our aim is to investigate the application of core text mining techniques for the automatic detection of abusive content across a range of social media sources include blogs, forums, media-sharing, Q&A and chat using datasets from Twitter, YouTube, MySpace, Kongregate, Formspring and Slashdot. Using supervised machine learning, we compare alternative text representations and dimension reduction approaches, including feature selection and feature enhancement, demonstrating the impact of these techniques on detection accuracies. In addition, we investigate the need for sampling on imbalanced datasets. Our conclusions are: (1) Dataset balancing boosts accuracies significantly for social media abusive content detection; (2) Feature reduction, important for large feature sets that are typical of social media datasets, improves efficiency whilst maintaining detection accuracies; (3) The use of generic structural features common across all our datasets proved to be of limited use in the automatic detection of abusive content. Our findings can support practitioners in selecting appropriate text mining strategies in this area.",
"title": ""
},
{
"docid": "5ddcfb5404ceaffd6957fc53b4b2c0d8",
"text": "A router's main function is to allow communication between different networks as quickly as possible and in efficient manner. The communication can be between LAN or between LAN and WAN. A firewall's function is to restrict unwanted traffic. In big networks, routers and firewall tasks are performed by different network devices. But in small networks, we want both functions on same device i.e. one single device performing both routing and firewalling. We call these devices as routing firewall. In Traditional networks, the devices are already available. But the next generation networks will be powered by Software Defined Networks. For wide adoption of SDN, we need northbound SDN applications such as routers, load balancers, firewalls, proxy servers, Deep packet inspection devices, routing firewalls running on OpenFlow based physical and virtual switches. But the SDN is still in early stage, so still there is very less availability of these applications. There already exist simple L3 Learning application which provides very elementary router function and also simple stateful firewalls providing basic access control. In this paper, we are implementing one SDN Routing Firewall Application which will perform both the routing and firewall function.",
"title": ""
},
{
"docid": "b12defb3d9d7c5ccda8c3e0b0858f55f",
"text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.",
"title": ""
},
{
"docid": "53acdb714d51d9eca25f1e635f781afa",
"text": "Research in several areas provides scientific guidance for use of graphical encoding to convey information in an information visualization display. By graphical encoding we mean the use of visual display elements such as icon color, shape, size, or position to convey information about objects represented by the icons. Literature offers inconclusive and often conflicting viewpoints, including the suggestion that the effectiveness of a graphical encoding depends on the type of data represented. Our empirical study suggests that the nature of the users’ perceptual task is more indicative of the effectiveness of a graphical encoding than the type of data represented. 1. Overview of Perceptual Issues In producing a design to visualize search results for a digital library called Envision [12, 13, 19], we found that choosing graphical devices and document attributes to be encoded with each graphical device is a surprisingly difficult task. By graphical devices we mean those visual display elements (e.g., icon color hue, color saturation, flash rate, shape, size, alphanumeric identifiers, position, etc.) used to convey encoded information. Providing access to graphically encoded information requires attention to a range of human cognitive activities, explored by researchers under at least three rubrics: psychophysics of visual search and identification tasks, graphical perception, and graphical language development. Research in these areas provides scientific guidance for design and evaluation of graphical encoding that might otherwise be reduced to opinion and personal taste. Because of space limits, we discuss here only a small portion of the research on graphical encoding that has been conducted. Additional information is in [20]. Ware [29] provides a broader review of perceptual issues pertaining to information visualization. Especially useful for designers are rankings by effectiveness of various graphical devices in communicating different types of data (e.g., nominal, ordinal, or quantitative). Christ [6] provides such rankings in the context of visual search and identification tasks and provides some empirical evidence to support his findings. Mackinlay [17] suggests rankings of graphical devices for conveying nominal, ordinal, and quantitative data in the context of graphical language design, but these rankings have not been empirically validated [personal communication]. Cleveland and McGill [8, 9] have empirically validated their ranking of graphical devices for quantitative data. The rankings suggested by Christ, Mackinlay, and Cleveland and McGill are not the same, while other literature offers more conflicting viewpoints, suggesting the need for further research. 1.1 Visual Search and Identification Tasks Psychophysics is a branch of psychology concerned with the \"relationship between characteristics of physical stimuli and the psychological experience they produce\" [28]. Studies in the psychophysics of visual search and identification tasks have roots in signal detection theory pertaining to air traffic control, process control, and cockpit displays. These studies suggest rankings of graphical devices [6, 7] described later in this paper and point out significant perceptual interactions among graphical devices used in multidimensional displays. Visual search tasks require visual scanning to locate one or more targets [6, 7, 31]. 
With a scatterplotlike display (sometimes known as a starfield display [1]), users perform a visual search task when they scan the display to determine the presence of one or more symbols meeting some specific criterion and to locate those symbols if present. For identification tasks, users go beyond visual search to report semantic data about symbols of interest, typically by answering true/false questions or by noting facts about encoded data [6, 7]. Measures of display effectiveness for visual search and identification tasks include time, accuracy, and cognitive workload. A more thorough introduction to signal detection theory may be found in Wickens’ book [31]. Issues involved in studies that influenced the Envision design are complex and findings are sometimes contradictory. Following is a representative overview, but many imProceedings of the IEEE Symposium on Information Visualization 2002 (InfoVis’02) 1522-404X/02 $17.00 © 2002 IEEE portant details are necessarily omitted due to space limitations. 1.1.1 Unidimensional Displays. For unidimensional displays — those involving a single graphical code — Christ’s [6, 7] meta-analysis of 42 prior studies suggests the following ranking of graphical devices by effectiveness: color, size, brightness or alphanumeric, and shape. Other studies confirm that color is the most effective graphical device for reducing display search time [7, 14, 25] but find it followed by shape and then letters or digits [7]. Benefits of color-coding increase for high-density displays [15, 16], but using shapes too similar to one another actually increases search time [22]. For identification tasks measuring accuracy with unidimensional displays, Christ’s work [6, 7] suggests the following ranking of graphical devices by effectiveness: alphanumeric, color, brightness, size, and shape. In a later study, Christ found that digits gave the most accurate results but that color, letters, and familiar geometric shapes all produced equal results with experienced subjects [7]. However, Jubis [14] found that shape codes yielded faster mean reaction times than color codes, while Kopala [15] found no significant difference among codes for identification tasks. 1.1.2 Multidimensional Displays. For multidimensional displays — those using multiple graphical devices combined in one visual object to encode several pieces of information — codes may be either redundant or non-redundant. A redundant code using color and shape to encode the same information yields average search speeds even faster than non-redundant color or shape encoding [7]. Used redundantly with other codes, color yields faster results than shape, and either color or shape is superior as a redundant code to both letters and digits [7]. Jubis [14] confirms that a redundant code involving both color and shape is superior to shape coding but is approximately equal to non-redundant color-coding. For difficult tasks, using redundant color-coding may significantly reduce reaction time and increase accuracy [15]. Benefits of redundant color-coding increase as displays become more cluttered or complex [15]. 1.1.3 Interactions Among Graphical Devices . Significant interactions among graphical devices complicate design for multidimensional displays. Color-coding interferes with all achromatic codes, reducing accuracy by as much as 43% [6]. 
Indeed, Luder [16] suggests that color has such cognitive dominance that it should only be used to encode the most important data and in situations where dependence on color-coding does not increase risk. While we found no supporting empirical evidence, we believe size and shape interact, causing the shape of very small objects to be perceived less accurately. 1.1.4 Ranges of Graphical Devices. The number of instances of each graphical device (e.g., how many colors or shapes are used in the code) is significant because it limits the range or number of values encoded using that device [3]. The conservative recommendation is to use only five or six distinct colors or shapes [3, 7, 27, 31]. However, some research suggests that 10 [3] to 18 [24] colors may be used for search tasks. 1.1.5 Integration vs. Non-integration Tasks. Later research has focused on how humans extract information from a multidimensional display to perform both integration and non-integration tasks [4, 26, 27]. An integration task uses information encoded non-redundantly with two or more graphical devices to reach a single decision or action, while a non-integration task bases decisions or actions on information encoded in only one graphical device. Studies [4, 30] provide evidence that object displays, in which multiple visual attributes of a single object present information about multiple characteristics, facilitate integration tasks, especially where multiple graphical encodings all convey information relevant to the task at hand. However, object displays hinder non-integration tasks, as additional effort is required to filter out unwanted information communicated by the objects. 1.2 Graphical Perception Graphical perception is “the visual decoding of the quantitative and qualitative information encoded on graphs,” where visual decoding means “instantaneous perception of the visual field that comes without apparent mental effort” [9, p. 828]. Cleveland and McGill studied the perception of quantitative data such as “numerical values of a variable...that are not highly discrete...” [9, p. 828]. They have identified and empirically validated a ranking of graphical devices for displaying quantitative data, ordered as follows from most to least accurately perceived [9, p. 830]: Position along a common scale; Position on identical but non-aligned scales; Length; Angle or Slope; Area; Volume, Density, and/or Color saturation; Color hue. 1.3 Graphical Language Development Graphical language development is based on the assertion that graphical devices communicate information equivalent to sentences [17] and thus call for attention to appropriate use of each graphical device. In his discussion of graphical languages, Mackinlay [17] suggests three different rankings of the effectiveness of various graphical devices in communicating quantitative (numerical), ordinal (ranked), and nominal (non-ordinal textual) data about objects. Although based on psychophysical and graphical perception research, Mackinlay's rankings have not been experimentally validated [personal communication]. 1.4 Observations on Prior Research These studies make it clear that no single graphical device works equally well for all users, nor does an",
"title": ""
},
{
"docid": "cc2a7d6ac63f12b29a6d30f20b5547be",
"text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk ystem in a desktop setting and are currently using it to build an intelligent home nvironment.",
"title": ""
},
{
"docid": "8e437e1acb78f737e259f6cd7a0de47d",
"text": "Being able to evaluate the accuracy of an informant is essential to communication. Three experiments explored preschoolers' (N=119) understanding that, in cases of conflict, information from reliable informants is preferable to information from unreliable informants. In Experiment 1, children were presented with previously accurate and inaccurate informants who presented conflicting names for novel objects. 4-year-olds-but not 3-year-olds-predicted whether an informant would be accurate in the future, sought, and endorsed information from the accurate over the inaccurate informant. In Experiment 2, both age groups displayed trust in knowledgeable over ignorant speakers. In Experiment 3, children extended selective trust when learning both verbal and nonverbal information. These experiments demonstrate that preschoolers have a key strategy for assessing the reliability of information.",
"title": ""
},
{
"docid": "9e4417a0ea21de3ffffb9017f0bad705",
"text": "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.",
"title": ""
},
{
"docid": "150413496e581f272b4ebb416cf0ebfd",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.12.031 ⇑ Corresponding author. Tel.: +86 029 8849 4701. E-mail addresses: t.c.li@mail.nwpu.edu.cn, lit3@lsbu.ac.uk (T. Li), sdsun@ nwpu.edu.cn (S. Sun), sattartp@lsbu.ac.uk (T.P. Sattar), corchado@usal.es (J.M. Corchado). Tiancheng Li a,⇑, Shudong Sun , Tariq Pervez Sattar , Juan Manuel Corchado c",
"title": ""
},
{
"docid": "ab7a4c1d7615f38c93392903a1174fa3",
"text": "Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions. Example structured facts include attributed objects (e.g., <flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man, walking, dog>), and positional information (e.g., <vase, on, table>). The collected annotations are in the form of fact-image pairs (e.g.,<man, walking, dog> and an image region containing this fact). With a language approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from images with captions and localized them in images in less than one day of processing time on standard CPU platforms.",
"title": ""
}
] |
scidocsrr
|
d3e19967f3537403a170b1a1b56d8c4c
|
Web table taxonomy and formalization
|
[
{
"docid": "211058f2d0d5b9cf555a6e301cd80a5d",
"text": "We present a method based on header paths for efficient and complete extraction of labeled data from tables meant for humans. Although many table configurations yield to the proposed syntactic analysis, some require access to semantic knowledge. Clicking on one or two critical cells per table, through a simple interface, is sufficient to resolve most of these problem tables. Header paths, a purely syntactic representation of visual tables, can be transformed (\"factored\") into existing representations of structured data such as category trees, relational tables, and RDF triples. From a random sample of 200 web tables from ten large statistical web sites, we generated 376 relational tables and 34,110 subject-predicate-object RDF triples.",
"title": ""
},
{
"docid": "a15f80b0a0ce17ec03fa58c33c57d251",
"text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.",
"title": ""
}
] |
[
{
"docid": "32cbc8e2652d16c6e29b0a5b9a26bbf3",
"text": "Automated multi-document extractive text summarization is a widely studied research problem in the field of natural language understanding. Such extractive mechanisms compute in some form the worthiness of a sentence to be included into the summary. While the conventional approaches rely on human crafted document-independent features to generate a summary, we develop a data-driven novel summary system called HNet, which exploits the various semantic and compositional aspects latent in a sentence to capture document independent features. The network learns sentence representation in a way that, salient sentences are closer in the vector space than non-salient sentences. This semantic and compositional feature vector is then concatenated with the documentdependent features for sentence ranking. Experiments on the DUC benchmark datasets (DUC-2001, DUC-2002 and DUC2004) indicate that our model shows significant performance gain of around 1.5-2 points in terms of ROUGE score compared with the state-of-the-art baselines.",
"title": ""
},
{
"docid": "a3bff96ab2a6379d21abaea00bc54391",
"text": "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.",
"title": ""
},
{
"docid": "6b7de13e2e413885e0142e3b6bf61dc9",
"text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.",
"title": ""
},
{
"docid": "d061ac8a6c312c768a9dfc6e59cfe6a8",
"text": "The assessment of crop yield losses is needed for the improvement of production systems that contribute to the incomes of rural families and food security worldwide. However, efforts to quantify yield losses and identify their causes are still limited, especially for perennial crops. Our objectives were to quantify primary yield losses (incurred in the current year of production) and secondary yield losses (resulting from negative impacts of the previous year) of coffee due to pests and diseases, and to identify the most important predictors of coffee yields and yield losses. We established an experimental coffee parcel with full-sun exposure that consisted of six treatments, which were defined as different sequences of pesticide applications. The trial lasted three years (2013-2015) and yield components, dead productive branches, and foliar pests and diseases were assessed as predictors of yield. First, we calculated yield losses by comparing actual yields of specific treatments with the estimated attainable yield obtained in plots which always had chemical protection. Second, we used structural equation modeling to identify the most important predictors. Results showed that pests and diseases led to high primary yield losses (26%) and even higher secondary yield losses (38%). We identified the fruiting nodes and the dead productive branches as the most important and useful predictors of yields and yield losses. These predictors could be added in existing mechanistic models of coffee, or can be used to develop new linear mixed models to estimate yield losses. Estimated yield losses can then be related to production factors to identify corrective actions that farmers can implement to reduce losses. The experimental and modeling approaches of this study could also be applied in other perennial crops to assess yield losses.",
"title": ""
},
{
"docid": "c713e4a5536c065d8d40c1e2482557bc",
"text": "In this paper, we propose a robust and accurate method to detect fingertips of hand palm with a down-looking camera mounted on an eyeglass for the utilization of hand gestures for user interaction between human and computers. To ensure consistent performance under unconstrained environments, we propose a novel method to precisely locate fingertips by combing both statistical information of palm edge distribution and structure information of convex null analysis on palm contour. Briefly, first SVM (support vector machine) with a statistical nine-bin based HOG (histogram of oriented gradient) features is introduced for robust hand detection from video stream. Then, binary image regions are segmented out by an adaptive Cg-Cr model on detected hands. With the prior information of hand contour, it takes a global optimization approach of convex hull analysis to locate hand fingertip. The experimental results have demonstrated that the proposed approach performs well because it can well detect all hand fingertips even under some extreme environments.",
"title": ""
},
{
"docid": "c8b4ea815c449872fde2df910573d137",
"text": "Two clinically distinct forms of Blount disease (early-onset and late-onset), based on whether the lower-limb deformity develops before or after the age of four years, have been described. Although the etiology of Blount disease may be multifactorial, the strong association with childhood obesity suggests a mechanical basis. A comprehensive analysis of multiplanar deformities in the lower extremity reveals tibial varus, procurvatum, and internal torsion along with limb shortening. Additionally, distal femoral varus is commonly noted in the late-onset form. When a patient has early-onset disease, a realignment tibial osteotomy before the age of four years decreases the risk of recurrent deformity. Gradual correction with distraction osteogenesis is an effective means of achieving an accurate multiplanar correction, especially in patients with late-onset disease.",
"title": ""
},
{
"docid": "d156813b45cb419d86280ee2947b6cde",
"text": "Within the realm of service robotics, researchers have placed a great amount of effort into learning motions and manipulations for task execution by robots. The task of robot learning is very broad, as it involves many tasks such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems as done by researchers in the past decades. We discuss the problems which have existed in robot learning and the solutions, technologies or developments (if any) which have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.",
"title": ""
},
{
"docid": "17984e5eb982085a7ff6d891d6b58d90",
"text": "Authors/Task Force Members: Stavros Konstantinides* (Chairperson) (Germany/ Greece), Adam Torbicki* (Co-chairperson) (Poland), Giancarlo Agnelli (Italy), Nicolas Danchin (France), David Fitzmaurice (UK), Nazzareno Galiè (Italy), J. Simon R. Gibbs (UK), Menno Huisman (The Netherlands), Marc Humbert† (France), Nils Kucher (Switzerland), Irene Lang (Austria), Mareike Lankeit (Germany), John Lekakis (Greece), Christoph Maack (Germany), Eckhard Mayer (Germany), Nicolas Meneveau (France), Arnaud Perrier (Switzerland), Piotr Pruszczyk (Poland), Lars H. Rasmussen (Denmark), Thomas H. Schindler (USA), Pavel Svitil (Czech Republic), Anton Vonk Noordegraaf (The Netherlands), Jose Luis Zamorano (Spain), Maurizio Zompatori (Italy)",
"title": ""
},
{
"docid": "a5e4199c16668f66656474f4eeb5d663",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "28fcee5c28c2b3aae6f4761afb00ebc2",
"text": "The presence of sarcasm in text can hamper the performance of sentiment analysis. The challenge is to detect the existence of sarcasm in texts. This challenge is compounded when bilingual texts are considered, for example using Malay social media data. In this paper a feature extraction process is proposed to detect sarcasm using bilingual texts; more specifically public comments on economic related posts on Facebook. Four categories of feature that can be extracted using natural language processing are considered; lexical, pragmatic, prosodic and syntactic. We also investigated the use of idiosyncratic feature to capture the peculiar and odd comments found in a text. To determine the effectiveness of the proposed process, a non-linear Support Vector Machine was used to classify texts, in terms of the identified features, according to whether they included sarcastic content or not. The results obtained demonstrate that a combination of syntactic, pragmatic and prosodic features produced the best performance with an F-measure score of 0.852.",
"title": ""
},
{
"docid": "3a7dca2e379251bd08b32f2331329f00",
"text": "Canonical correlation analysis (CCA) is a method for finding linear relations between two multidimensional random variables. This paper presents a generalization of the method to more than two variables. The approach is highly scalable, since it scales linearly with respect to the number of training examples and number of views (standard CCA implementations yield cubic complexity). The method is also extended to handle nonlinear relations via kernel trick (this increases the complexity to quadratic complexity). The scalability is demonstrated on a large scale cross-lingual information retrieval task.",
"title": ""
},
{
"docid": "c36fec7cebe04627ffcd9a689df8c5a2",
"text": "In seems there are two dimensions that underlie most judgments of traits, people, groups, and cultures. Although the definitions vary, the first makes reference to attributes such as competence, agency, and individualism, and the second to warmth, communality, and collectivism. But the relationship between the two dimensions seems unclear. In trait and person judgment, they are often positively related; in group and cultural stereotypes, they are often negatively related. The authors report 4 studies that examine the dynamic relationship between these two dimensions, experimentally manipulating the location of a target of judgment on one and examining the consequences for the other. In general, the authors' data suggest a negative dynamic relationship between the two, moderated by factors the impact of which they explore.",
"title": ""
},
{
"docid": "77326d21f3bfdbf0d6c38c2cde871bf5",
"text": "There have been a number of linear, feature-based models proposed by the information retrieval community recently. Although each model is presented differently, they all share a common underlying framework. In this paper, we explore and discuss the theoretical issues of this framework, including a novel look at the parameter space. We then detail supervised training algorithms that directly maximize the evaluation metric under consideration, such as mean average precision. We present results that show training models in this way can lead to significantly better test set performance compared to other training methods that do not directly maximize the metric. Finally, we show that linear feature-based models can consistently and significantly outperform current state of the art retrieval models with the correct choice of features.",
"title": ""
},
{
"docid": "fe702971f36a5ffab26f405ba7b6bda7",
"text": "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving the its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.",
"title": ""
},
{
"docid": "105b0c048852de36d075b1db929c1fa4",
"text": "OBJECTIVES\nThis study was carried out to investigate the potential of titanium to induce hypersensitivity in patients chronically exposed to titanium-based dental or endoprosthetic implants.\n\n\nMETHODS\nFifty-six patients who had developed clinical symptoms after receiving titanium-based implants were tested in the optimized lymphocyte transformation test MELISA against 10 metals including titanium. Out of 56 patients, 54 were patch-tested with titanium as well as with other metals. The implants were removed in 54 patients (2 declined explantation), and 15 patients were retested in MELISA.\n\n\nRESULTS\nOf the 56 patients tested in MELISA, 21 (37.5%) were positive, 16 (28.6%) ambiguous, and 19 (33.9%) negative to titanium. In the latter group, 11 (57.9%) showed lymphocyte reactivity to other metals, including nickel. All 54 patch-tested patients were negative to titanium. Following removal of the implants, all 54 patients showed remarkable clinical improvement. In the 15 retested patients, this clinical improvement correlated with normalization in MELISA reactivity.\n\n\nCONCLUSION\nThese data clearly demonstrate that titanium can induce clinically-relevant hypersensitivity in a subgroup of patients chronically exposed via dental or endoprosthetic implants.",
"title": ""
},
{
"docid": "21af4f870f466baa4bdb02b37c4d9656",
"text": "Software maps -- linking rectangular 3D-Treemaps, software system structure, and performance indicators -- are commonly used to support informed decision making in software-engineering processes. A key aspect for this decision making is that software maps provide the structural context required for correct interpretation of these performance indicators. In parallel, source code repositories and collaboration platforms are an integral part of today's software-engineering tool set, but cannot properly incorporate software maps since implementations are only available as stand-alone applications. Hence, software maps are 'disconnected' from the main body of this tool set, rendering their use and provisioning overly complicated, which is one of the main reasons against regular use. We thus present a web-based rendering system for software maps that achieves both fast client-side page load time and interactive frame rates even with large software maps. We significantly reduce page load time by efficiently encoding hierarchy and geometry data for the net transport. Apart from that, appropriate interaction, layouting, and labeling techniques as well as common image enhancements aid evaluation of project-related quality aspects. Metrics provisioning can further be implemented by predefined attribute mappings to simplify communication of project specific quality aspects. The system is integrated into dashboards to demonstrate how our web-based approach makes software maps more accessible to many different stakeholders in software-engineering projects.",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "1d14a2ff9e8dd162ee2ea80480527eef",
"text": "Feature learning on point clouds has shown great promise, with the introduction of effective and generalizable deep learning frameworks such as pointnet++. Thus far, however, point features have been abstracted in an independent and isolated manner, ignoring the relative layout of neighboring points as well as their features. In the present article, we propose to overcome this limitation by using spectral graph convolution on a local graph, combined with a novel graph pooling strategy. In our approach, graph convolution is carried out on a nearest neighbor graph constructed from a point’s neighborhood, such that features are jointly learned. We replace the standard max pooling step with a recursive clustering and pooling strategy, devised to aggregate information from within clusters of nodes that are close to one another in their spectral coordinates, leading to richer overall feature descriptors. Through extensive experiments on diverse datasets, we show a consistent demonstrable advantage for the tasks of both point set classification and segmentation.",
"title": ""
},
{
"docid": "e5a69aa4eaf7e38a5372fb3d39571669",
"text": "A widespread folklore for explaining the success of Convolutional Neural Networks (CNNs) is that CNNs use a more compact representation than the Fullyconnected Neural Network (FNN) and thus require fewer training samples to accurately estimate their parameters. We initiate the study of rigorously characterizing the sample complexity of estimating CNNs. We show that for an m-dimensional convolutional filter with linear activation acting on a d-dimensional input, the sample complexity of achieving population prediction error of is r Opm{ q 2, whereas the sample-complexity for its FNN counterpart is lower bounded by Ωpd{ q samples. Since, in typical settings m ! d, this result demonstrates the advantage of using a CNN. We further consider the sample complexity of estimating a onehidden-layer CNN with linear activation where both the m-dimensional convolutional filter and the r-dimensional output weights are unknown. For this model, we show that the sample complexity is r O ` pm` rq{ 2 ̆ when the ratio between the stride size and the filter size is a constant. For both models, we also present lower bounds showing our sample complexities are tight up to logarithmic factors. Our main tools for deriving these results are a localized empirical process analysis and a new lemma characterizing the convolutional structure. We believe that these tools may inspire further developments in understanding CNNs.",
"title": ""
},
{
"docid": "44d4d0b9fea72fe2ac5584c80a122b72",
"text": "We derive a new intrinsic social motivation for multi-agent reinforcement learning (MARL), in which agents are rewarded for having causal influence over another agent’s actions. Causal influence is assessed using counterfactual reasoning. The reward does not depend on observing another agent’s reward function, and is thus a more realistic approach to MARL than taken in previous work. We show that the causal influence reward is related to maximizing the mutual information between agents’ actions. We test the approach in challenging social dilemma environments, where it consistently leads to enhanced cooperation between agents and higher collective reward. Moreover, we find that rewarding influence can lead agents to develop emergent communication protocols. We therefore employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward. Finally, we show that influence can be computed by equipping each agent with an internal model that predicts the actions of other agents. This allows the social influence reward to be computed without the use of a centralised controller, and as such represents a significantly more general and scalable inductive bias for MARL with independent agents.",
"title": ""
}
] |
scidocsrr
|
43079322e7d540f44cff9e2f198eab87
|
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
|
[
{
"docid": "fb58d6fe77092be4bce5dd0926c563de",
"text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.",
"title": ""
},
{
"docid": "1fc6b2ffedfddb0dc476c3470c52fb13",
"text": "Exponential growth in Electronic Healthcare Records (EHR) has resulted in new opportunities and urgent needs for discovery of meaningful data-driven representations and patterns of diseases in Computational Phenotyping research. Deep Learning models have shown superior performance for robust prediction in computational phenotyping tasks, but suffer from the issue of model interpretability which is crucial for clinicians involved in decision-making. In this paper, we introduce a novel knowledge-distillation approach called Interpretable Mimic Learning, to learn interpretable phenotype features for making robust prediction while mimicking the performance of deep learning models. Our framework uses Gradient Boosting Trees to learn interpretable features from deep learning models such as Stacked Denoising Autoencoder and Long Short-Term Memory. Exhaustive experiments on a real-world clinical time-series dataset show that our method obtains similar or better performance than the deep learning models, and it provides interpretable phenotypes for clinical decision making.",
"title": ""
},
{
"docid": "251210e932884c2103f7f2d71c5ec519",
"text": "Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) have demonstrated substantial performance improvements. We introduce a model which uses a deep recurrent auto encoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. The model makes no assumptions about how noise affects the signal, nor the existence of distinct noise environments. Instead, the model can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate the model is competitive with existing feature denoising approaches on the Aurora2 task, and outperforms a tandem approach where deep networks are used to predict phoneme posteriors directly.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] |
[
{
"docid": "3c695b12b47f358012f10dc058bf6f6a",
"text": "This paper addresses the problem of classifying places in the environment of a mobile robot into semantic categories. We believe that semantic information about the type of place improves the capabilities of a mobile robot in various domains including localization, path-planning, or human-robot interaction. Our approach uses AdaBoost, a supervised learning algorithm, to train a set of classifiers for place recognition based on laser range data. In this paper we describe how this approach can be applied to distinguish between rooms, corridors, doorways, and hallways. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.",
"title": ""
},
{
"docid": "c367d19e00816538753e6226785d05fd",
"text": "BACKGROUND AND OBJECTIVE\nMildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke.\n\n\nMETHODS\nWe performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated.\n\n\nRESULTS\nA total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups.\n\n\nCONCLUSION\nMildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.",
"title": ""
},
{
"docid": "bf4a085beb7e8ce496f12493e705ee2a",
"text": "The Byzantine attack in cooperative spectrum sensing (CSS), also known as the spectrum sensing data falsification (SSDF) attack in the literature, is one of the key adversaries to the success of cognitive radio networks (CRNs). Over the past couple of years, the research on the Byzantine attack and defense strategies has gained worldwide increasing attention. In this paper, we provide a comprehensive survey and tutorial on the recent advances in the Byzantine attack and defense for CSS in CRNs. Specifically, we first briefly present the preliminaries of CSS for general readers, including signal detection techniques, hypothesis testing, and data fusion. Second, we propose a taxonomy of the existing Byzantine attack behaviors and elaborate on the corresponding attack parameters, which determine where, who, how, and when to launch attacks. Then, from the perspectives of homogeneous or heterogeneous scenarios, we classify the existing defense algorithms, and provide an in-depth tutorial on the state-of-the-art Byzantine defense schemes, commonly known as robust or secure CSS in the literature. Furthermore, we analyze the spear-and-shield relation between Byzantine attack and defense from an interactive game-theoretical perspective. Moreover, we highlight the unsolved research challenges and depict the future research directions.",
"title": ""
},
{
"docid": "fc625b8d0ffe9038a2fad1b91b93b1fe",
"text": "Three groups of informants--two in California, one in Atlanta--recalled their experiences of the 1989 Loma Prieta earthquake shortly after the event and again 11/2 years later. The Californians' recalls of their own earthquake experiences were virtually perfect. Even their recalls of hearing the news of an earthquake-related event were very good: much higher than Atlantan recalls of hearing about the quake itself. Atlantans who had relatives in the affected area remembered significantly more than those who did not. These data show that personal involvement in the quake led to greatly improved recall, but do not show why. Many Californian informants reported low levels of stress/arousal during the event; arousal ratings were not significantly correlated with recall. The authors suggest that repeated narrative rehearsals may have played an important role.",
"title": ""
},
{
"docid": "2a91eeedbb43438f9ed449e14d93ce8e",
"text": "In this paper, we introduce the concept of green noise—the midfrequency component of white noise—and its advantages over blue noise for digital halftoning. Unlike blue-noise dither patterns, which are composed exclusively of isolated pixels, green-noise dither patterns are composed of pixel-clusters making them less susceptible to image degradation from nonideal printing artifacts such as dot-gain. Although they are not the only techniques which generate clustered halftones, error-diffusion with output-dependent feedback and variations based on filter weight perturbation are shown to be good generators of green noise, thereby allowing for tunable coarseness. Using statistics developed for blue noise, we closely examine the spectral content of resulting dither patterns. We introduce two spatial-domain statistics for analyzing the spatial arrangement of pixels in aperiodic dither patterns, because greennoise patterns may be anisotropic, and therefore spectral statistics based on radial averages may be inappropriate for the study of these patterns.",
"title": ""
},
{
"docid": "be6a9831aba65c9ec91833a3105c6427",
"text": "In last few years there are major changes and evolution has been done on classification of data. As the application area of technology is increases the size of data also increases. Classification of data becomes difficult because of unbounded size and imbalance nature of data. Class imbalance problem become greatest issue in data mining. Imbalance problem occur where one of the two classes having more sample than other classes. The most of algorithm are more focusing on classification of major sample while ignoring or misclassifying minority sample. The minority samples are those that rarely occur but very important. There are different methods available for classification of imbalance data set which is divided into three main categories, the algorithmic approach, datapreprocessing approach and feature selection approach. Each of this technique has their own advantages and disadvantages. In this paper systematic study of each approach is define which gives the right direction for research in class imbalance problem.",
"title": ""
},
{
"docid": "67c6fa2bd51680245e3f81cd392a05cb",
"text": "This paper presents compact CMOS quadrature hybrids by using the transformer over-coupling technique to eliminate significant phase error in the presence of low-Q CMOS components. The technique includes the inductive and capacitive couplings, where the former is realized by employing a tightly inductive-coupled transformer and the latter by an additional capacitor across the transformer winding. Their phase balance effects are investigated and the design methodology is presented. The measurement results show that the designed 24-GHz CMOS quadrature hybrid has excellent phase balance within plusmn0.6deg and amplitude balance less than plusmn 0.3 dB over a 16% fractional bandwidth with extremely compact size of 0.05 mm2. For the 2.4-GHz hybrid monolithic microwave integrated circuit, it has measured phase balance of plusmn0.8deg and amplitude balance of plusmn 0.3 dB over a 10% fractional bandwidth with a chip area of 0.1 mm2 .",
"title": ""
},
{
"docid": "95c2567392ace11531904c87b5e5630c",
"text": "This paper presents a human-scale virtual environment (VE) with haptic feedback along with two experiments performed in the context of product design. The user interacts with a virtual mock-up using a large-scale bimanual string-based haptic interface called SPIDAR (Space Interface Device for Artificial Reality). An original self-calibration method is proposed. A vibro-tactile glove was developed and integrated to the SPIDAR to provide tactile cues to the operator. The purpose of the first experiment was: (1) to examine the effect of tactile feedback in a task involving reach-and-touch of different parts of a digital mock-up, and (2) to investigate the use of sensory substitution in such tasks. The second experiment aimed to investigate the effect of visual and auditory feedback in a car-light maintenance task. Results of the first experiment indicate that the users could easily and quickly access and finely touch the different parts of the digital mock-up when sensory feedback (either visual, auditory, or tactile) was present. Results of the of the second experiment show that visual and auditory feedbacks improve average placement accuracy by about 54 % and 60% respectively compared to the open loop case. Index Terms — Virtual reality, virtual environment, haptic interaction, sensory substitution, human performance.",
"title": ""
},
{
"docid": "93810beca2ba988e29852cd1bc4b8ab6",
"text": "Emotion dysregulation is thought to be critical to the development of negative psychological outcomes. Gross (1998b) conceptualized the timing of regulation strategies as key to this relationship, with response-focused strategies, such as expressive suppression, as less effective and more detrimental compared to antecedent-focused ones, such as cognitive reappraisal. In the current study, we examined the relationship between reappraisal and expressive suppression and measures of psychopathology, particularly for stress-related reactions, in both undergraduate and trauma-exposed community samples of women. Generally, expressive suppression was associated with higher, and reappraisal with lower, self-reported stress-related symptoms. In particular, expressive suppression was associated with PTSD, anxiety, and depression symptoms in the trauma-exposed community sample, with rumination partially mediating this association. Finally, based on factor analysis, expressive suppression and cognitive reappraisal appear to be independent constructs. Overall, expressive suppression, much more so than cognitive reappraisal, may play an important role in the experience of stress-related symptoms. Further, given their independence, there are potentially relevant clinical implications, as interventions that shift one of these emotion regulation strategies may not lead to changes in the other.",
"title": ""
},
{
"docid": "d81c25a953bc14e3316e2ae7485c023a",
"text": "The amphibious robot is so attractive and challenging for its broad application and its complex working environment. It should walk on rough ground, maneuver underwater and pass through transitional terrain such as sand and mud, simultaneously. To tackle with such a complex task, a novel amphibious robot (AmphiHex-I) with transformable leg-flipper composite propulsion is proposed and developed. This paper presents the detailed structure design of the transformable leg-flipper propulsion mechanism and its drive module, which enables the amphibious robot passing through the terrain, water and transitional zone between them. A preliminary theoretical analysis is conducted to study the interaction between the elliptic leg and transitional environment such as granular medium. An orthogonal experiment is designed to study the leg locomotion in the sandy and muddy terrain with different water content. Finally, basic propulsion experiments of AmphiHex-I are launched, which verified the locomotion capability on land and underwater is achieved by the transformable leg-flipper mechanism.",
"title": ""
},
{
"docid": "d170aec1225da4ec34d3847a2807d9b5",
"text": "By leveraging advances in deep learning, challenging pattern recognition problems have been solved in computer vision, speech recognition, natural language processing, and more. Mobile computing has also adopted these powerful modeling approaches, delivering astonishing success in the field’s core application domains, including the ongoing transformation of human activity recognition technology through machine learning.",
"title": ""
},
{
"docid": "897a077ce07c49b5946bdfd154d2e46e",
"text": "This paper presents the effect of incorporating the Single-Machine-Infinite-Bus (SMIB) power system embraced with highly penetrated Photovoltaic (PV) generator with shunt compensation. The conventional variable shunt capacitor is connected at a common coupling point where the terminals of the two generators are combined. The dynamics of the Automatic Voltage Regulator (AVR) and Turbine Governor (TG) of the synchronous generator and the nonlinearity of the output characteristics of the PV generator are included. The stability of the operating point of the system is firstly studied through the linearized model eigenvalues. Time domain simulations based on the complete nonlinear model in d-q stationary reference frame are then presented after step changes on the values of the shunt compensation. The study is carried out at three realistic solar irradiance levels. It is concluded that shunt compensation of SMIB power system equipped with PV generator can control the total reactive power injected to the grid. During numerical simulations, it has been concluded that to have a stable system operating point, the value of the shunt compensation should be reduced as the solar irradiance level decreases. Additionally, the lower the solar irradiance level, the narrower the range of shunt compensation values which can provide a stable system operating point.",
"title": ""
},
{
"docid": "1ccc1b904fa58b1e31f4f3f4e2d76707",
"text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.",
"title": ""
},
{
"docid": "c9c9af3680df50d4dd72c73c90a41893",
"text": "BACKGROUND\nVideo games provide extensive player involvement for large numbers of children and adults, and thereby provide a channel for delivering health behavior change experiences and messages in an engaging and entertaining format.\n\n\nMETHOD\nTwenty-seven articles were identified on 25 video games that promoted health-related behavior change through December 2006.\n\n\nRESULTS\nMost of the articles demonstrated positive health-related changes from playing the video games. Variability in what was reported about the games and measures employed precluded systematically relating characteristics of the games to outcomes. Many of these games merged the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, and behavior-change technology (e.g., tailored messages, goal setting). Stories in video games allow for modeling, vicarious identifying experiences, and learning a story's \"moral,\" among other change possibilities.\n\n\nCONCLUSIONS\nResearch is needed on the optimal use of game-based stories, fantasy, interactivity, and behavior change technology in promoting health-related behavior change.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "f489bf5d0e2697dfa670eac3e8d2c82c",
"text": "Ideally, rationally designed tissue engineering scaffolds promote natural wound healing and regeneration. Therefore, we sought to synthesize a biomimetic hydrogel specifically designed to promote tissue repair and chose hyaluronic acid (HA; also called hyaluronan) as our initial material. Hyaluronic acid is a naturally occurring polymer associated with various cellular processes involved in wound healing, such as angiogenesis. Hyaluronic acid also presents unique advantages: it is easy to produce and modify, hydrophilic and nonadhesive, and naturally biodegradable. We prepared a range of glycidyl methacrylate-HA (GMHA) conjugates, which were subsequently photopolymerized to form crosslinked GMHA hydrogels. A range of hydrogel degradation rates was achieved as well as a corresponding, modest range of material properties (e.g., swelling, mesh size). Increased amounts of conjugated methacrylate groups corresponded with increased crosslink densities and decreased degradation rates and yet had an insignificant effect on human aortic endothelial cell cytocompatibility and proliferation. Rat subcutaneous implants of the GMHA hydrogels showed good biocompatibility, little inflammatory response, and similar levels of vascularization at the implant edge compared with those of fibrin positive controls. Therefore, these novel GMHA hydrogels are suitable for modification with adhesive peptide sequences (e.g., RGD) and use in a variety of wound-healing applications.",
"title": ""
},
{
"docid": "0e1f0eb73d2e27269ad305645eb4e236",
"text": "Multi-label learning deals with data associated with multiple labels simultaneously. Previous work on multi-label learning assumes that for each instance, the “full” label set associated with each training instance is given by users. In many applications, however, to get the full label set for each instance is difficult and only a “partial” set of labels is available. In such cases, the appearance of a label means that the instance is associated with this label, while the absence of a label does not imply that this label is not proper for the instance. We call this kind of problem “weak label” problem. In this paper, we propose the WELL (WEak Label Learning) method to solve the weak label problem. We consider that the classification boundary for each label should go across low density regions, and that each label generally has much smaller number of positive examples than negative examples. The objective is formulated as a convex optimization problem which can be solved efficiently. Moreover, we exploit the correlation between labels by assuming that there is a group of low-rank base similarities, and the appropriate similarities between instances for different labels can be derived from these base similarities. Experiments validate the performance of WELL.",
"title": ""
},
{
"docid": "e89db5214e5bea32b37539471fccb226",
"text": "In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.",
"title": ""
},
{
"docid": "2608c1955590c8646dcbc6dbadf797a3",
"text": "This study aims to examine the key factors affecting users' adoption of e-book by an extension of unified theory of acceptance and use of technology (UTAUT) with environment concern, perceived benefit, and benevolence trust. 343 samples of data were analyzed using a structural equation modeling (SEM). Results showed that user's adoption of e-book is significantly determined by all of the factors.",
"title": ""
}
] |
scidocsrr
|
034fd5f04c38a95b847b43254c370df3
|
Unsupervised Surgical Task Segmentation with Milestone Learning
|
[
{
"docid": "bad378dceb9e4c060fa52acdf328d845",
"text": "Autonomous robot execution of surgical sub-tasks has the potential to reduce surgeon fatigue and facilitate supervised tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an autonomous multilateral surgical debridement system using the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception with trajopt, an optimization-based motion planner, and model predictive control (MPC). Laboratory experiments involving sensing, grasping, and removal of 120 fragments suggest that an autonomous surgical robot can achieve robustness comparable to human performance. Our robot system demonstrated the advantage of multilateral systems, as the autonomous execution was 1.5× faster with two arms than with one; however, it was two to three times slower than a human. Execution speed could be improved with better state estimation that would allow more travel between MPC steps and fewer MPC replanning cycles. The three primary contributions of this paper are: (1) introducing debridement as a sub-task of interest for surgical robotics, (2) demonstrating the first reliable autonomous robot performance of a surgical sub-task using the Raven, and (3) reporting experiments that highlight the importance of accurate state estimation for future research. Further information including code, photos, and video is available at: http://rll.berkeley.edu/raven.",
"title": ""
},
{
"docid": "951f79f828d3375c7544129cdb575940",
"text": "In this paper, we deal with imitation learning of arm movements in humanoid robots. Hidden Markov models (HMM) are used to generalize movements demonstrated to a robot multiple times. They are trained with the characteristic features (key points) of each demonstration. Using the same HMM, key points that are common to all demonstrations are identified; only those are considered when reproducing a movement. We also show how HMM can be used to detect temporal dependencies between both arms in dual-arm tasks. We created a model of the human upper body to simulate the reproduction of dual-arm movements and generate natural-looking joint configurations from tracked hand paths. Results are presented and discussed",
"title": ""
},
{
"docid": "3ff06c4ecf9b8619150c29c9c9a940b9",
"text": "It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.",
"title": ""
},
{
"docid": "ecd79e88962ca3db82eaf2ab94ecd5f4",
"text": "Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.",
"title": ""
}
] |
[
{
"docid": "fd9d857d7299bf37bba90bf1b5adf300",
"text": "How should we assess the comparability of driving on a road and ‘‘driving’’ in a simulator? If similar patterns of behaviour are observed, with similar differences between individuals, then we can conclude that driving in the simulator will deliver representative results and the advantages of simulators (controlled environments, hazardous situations) can be appreciated. To evaluate a driving simulator here we compare hazard detection while driving on roads, while watching short film clips recorded from a vehicle moving through traffic, and while driving through a simulated city in a fully instrumented fixed-base simulator with a 90 degree forward view (plus mirrors) that is under the speed/direction control of the driver. In all three situations we find increased scanning by more experienced and especially professional drivers, and earlier eye fixations on hazardous objects for experienced drivers. This comparability encourages the use of simulators in drivers training and testing. ! 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c9f9673f3e46bb6fe17075fd212ef3ef",
"text": "This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs’ performance on image classification. The widely used standard Dropout has advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both rather than discards one subset. We also introduce two rotational convolution techniques, i.e. rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC) to boost CNNs’ performance on the robustness for rotation transformation. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on ImageNet2012 classification task demonstrate that sDropout not only enhances the performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust for rotation transformations. Overall, FRPC together with sDropout bring 1.18% (model of Zeiler and Fergus [24], 10-view, top-1) accuracy increase in ImageNet 2012 classification task compared to the original network.",
"title": ""
},
{
"docid": "a6d26826ee93b3b5dec8282d0c632f8e",
"text": "Superficial Acral Fibromyxoma is a rare tumor of soft tissues. It is a relatively new entity described in 2001 by Fetsch et al. It probably represents a fibrohistiocytic tumor with less than 170 described cases. We bring a new case of SAF on the 5th toe of the right foot, in a 43-year-old woman. After surgical excision with safety margins which included the nail apparatus, it has not recurred (22 months of follow up). We carried out a review of the location of all SAF published up to the present day.",
"title": ""
},
{
"docid": "361a340945df32d535fcf92a7288f0fe",
"text": "Digital soil mapping has been widely used as a cost-effective method for generating soil maps. However, current DSM data representation rarely incorporate contextual information of the landscape. DSM models are usually calibrated using point observations intersected with spatially corresponding point covariates. Here, we demonstrate the use of the convolutional neural network model that incorporates contextual information surrounding an observation to significantly improve the prediction accuracy over conventional DSM models. We describe a convolutional neural network (CNN) model that takes inputs 5 as images of covariates and explores spatial contextual information by finding non-linear local spatial relationships of neighbouring pixels. Unique features of the proposed model include: input represented as 3D stack of images, data augmentation to reduce overfitting, and simultaneously predicting multiple outputs. Using a soil mapping example in Chile, the CNN model was trained to simultaneously predict soil organic carbon at multiples depths across the country. The results showed the CNN model reduced the error by 30% compared with conventional techniques that only used point information of covariates. In the 10 example of country-wide mapping at 100 m resolution, the neighbourhood size from 3 to 9 pixels is more effective than at a point location and larger neighbourhood sizes. In addition, the CNN model produces less prediction uncertainty and it is able to predict soil carbon at deeper soil layers more accurately. Because the CNN model takes covariate represented as images, it offers a simple and effective framework for future DSM models. Copyright statement. Author(s) 2018. CC BY 4.0 License 15",
"title": ""
},
{
"docid": "bc8a6c63603c34587acd4a5b2c2a36e1",
"text": "This paper presents adoption of a new hash algorithm in digital signature. Digital signature presents a technique to endorse the content of the message. This message has not been altered throughout the communication process. Due to this, it increased the receiver confidence that the message was unchanged. If the message is digitally signed, any changes in the message will invalidate the signature. The comparison of digital signature between Rivest, Shamir and Adleman (RSA) algorithms are summarized. The finding reveals that previous algorithms used large file sizes. Finally the new encoding and decoding dynamic hash algorithm is proposed in a digital signature. The proposed algorithm had reduced significantly the file sizes (8 bytes) during the transferring message.",
"title": ""
},
{
"docid": "72b67938df75b1668218e290dc2e1478",
"text": "Forensic entomology, the use of insects and other arthropods in forensic investigations, is becoming increasingly more important in such investigations. To ensure its optimal use by a diverse group of professionals including pathologists, entomologists and police officers, a common frame of guidelines and standards is essential. Therefore, the European Association for Forensic Entomology has developed a protocol document for best practice in forensic entomology, which includes an overview of equipment used for collection of entomological evidence and a detailed description of the methods applied. Together with the definitions of key terms and a short introduction to the most important methods for the estimation of the minimum postmortem interval, the present paper aims to encourage a high level of competency in the field of forensic entomology.",
"title": ""
},
{
"docid": "914a9f6945aab20ece40cb0979126ad9",
"text": "Large-scale cover song recognition involves calculating itemto-item similarities that can accommodate differences in timing and tempo, rendering simple Euclidean measures unsuitable. Expensive solutions such as dynamic time warping do not scale to million of instances, making them inappropriate for commercial-scale applications. In this work, we transform a beat-synchronous chroma matrix with a 2D Fourier transform and show that the resulting representation has properties that fit the cover song recognition task. We can also apply PCA to efficiently scale comparisons. We report the best results to date on the largest available dataset of around 18,000 cover songs amid one million tracks, giving a mean average precision of 3.0%.",
"title": ""
},
{
"docid": "492f99ab4470578ce8ac207c1da726fe",
"text": "In recent years, there has been an intense research effort to understand the cognitive processes and structures underlying expert behaviour. Work in different fields, including scientific domains, sports, games, and mnemonics, has shown that there are vast differences in perceptual abilities between experts and novices, and that these differences may underpin other cognitive differences in learning, memory, and problem solving. In this article, we evaluate the progress made in the last years through the eyes of an outstanding, albeit fictional, expert: Sherlock Holmes. We first use the Sherlock Holmes character to illustrate expert processes as described by current research and theories. In particular, the role of perception, as well as the nature and influence of expert knowledge, are all present in the description of Conan Doyle’s hero. In the second part of the article, we discuss a number of issues that current research on expertise has barely addressed. These gaps include, for example, several forms of reasoning, the influence of emotions on cognition, and the effect of age on experts’ knowledge and cognitive processes. Thus, although nearly 120 years old, Conan Doyle’s books show remarkable illustrations of expert behaviour, including the coverage of themes that have mostly been overlooked by current research.",
"title": ""
},
{
"docid": "4be57bfa4e510cdf0e8ad833034d7fce",
"text": "Dynamic data flow tracking (DFT) is a technique broadly used in a variety of security applications that, unfortunately, exhibits poor performance, preventing its adoption in production systems. We present ShadowReplica, a new and efficient approach for accelerating DFT and other shadow memory-based analyses, by decoupling analysis from execution and utilizing spare CPU cores to run them in parallel. Our approach enables us to run a heavyweight technique, like dynamic taint analysis (DTA), twice as fast, while concurrently consuming fewer CPU cycles than when applying it in-line. DFT is run in parallel by a second shadow thread that is spawned for each application thread, and the two communicate using a shared data structure. We avoid the problems suffered by previous approaches, by introducing an off-line application analysis phase that utilizes both static and dynamic analysis methodologies to generate optimized code for decoupling execution and implementing DFT, while it also minimizes the amount of information that needs to be communicated between the two threads. Furthermore, we use a lock-free ring buffer structure and an N-way buffering scheme to efficiently exchange data between threads and maintain high cache-hit rates on multi-core CPUs. Our evaluation shows that ShadowReplica is on average ~2.3× faster than in-line DFT (~2.75× slowdown over native execution) when running the SPEC CPU2006 benchmark, while similar speed ups were observed with command-line utilities and popular server software. Astoundingly, ShadowReplica also reduces the CPU cycles used up to 30%.",
"title": ""
},
{
"docid": "6998297aeba2e02133a6d62aa94508be",
"text": "License Plate Detection and Recognition System is an image processing technique used to identify a vehicle by its license plate. Here we propose an accurate and robust method of license plate detection and recognition from an image using contour analysis. The system is composed of two phases: the detection of the license plate, and the character recognition. The license plate detection is performed for obtaining the candidate region of the vehicle license plate and determined using the edge based text detection technique. In the recognition phase, the contour analysis is used to recognize the characters after segmenting each character. The performance of the proposed system has been tested on various images and provides better results.",
"title": ""
},
{
"docid": "090f5cb05d2f9d6d2456b3eb02a3a663",
"text": "The mesialization of molars in the lower jaw represents a particularly demanding scenario for the quality of orthodontic anchorage. The use of miniscrew implants has proven particularly effective; whereby, these orthodontic implants are either directly loaded (direct anchorage) or employed indirectly to stabilize a dental anchorage block (indirect anchorage). The objective of this study was to analyze the biomechanical differences between direct and indirect anchorage and their effects on the primary stability of the miniscrew implants. For this purpose, several computer-aided design/computer-aided manufacturing (CAD-CAM)-models were prepared from the CT data of a 21-year-old patient, and these were combined with virtually constructed models of brackets, arches, and miniscrew implants. Based on this, four finite element method (FEM) models were generated by three-dimensional meshing. Material properties, boundary conditions, and the quality of applied forces (direction and magnitude) were defined. After solving the FEM equations, strain values were recorded at predefined measuring points. The calculations made using the FEM models with direct and indirect anchorage were statistically evaluated. The loading of the compact bone in the proximity of the miniscrew was clearly greater with direct than it was with indirect anchorage. The more anchor teeth were integrated into the anchoring block with indirect anchorage, the smaller was the peri-implant loading of the bone. Indirect miniscrew anchorage is a reliable possibility to reduce the peri-implant loading of the bone and to reduce the risk of losing the miniscrew. The more teeth are integrated into the anchoring block, the higher is this protective effect. In clinical situations requiring major orthodontic forces, it is better to choose an indirect anchorage in order to minimize the risk of losing the miniscrew.",
"title": ""
},
{
"docid": "aae97dd982300accb15c05f9aa9202cd",
"text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. Therefore we developed a new bipedal walking robot which is capable to express emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.",
"title": ""
},
{
"docid": "67c8047fbb9e027f92910c4a4f93347a",
"text": "Mastocytosis is a rare, heterogeneous disease of complex etiology, characterized by a marked increase in mast cell density in the skin, bone marrow, liver, spleen, gastrointestinal mucosa and lymph nodes. The most frequent site of organ involvement is the skin. Cutaneous lesions include urticaria pigmentosa, mastocytoma, diffuse and erythematous cutaneous mastocytosis, and telangiectasia macularis eruptiva perstans. Human mast cells originate from CD34 progenitors, under the influence of stem cell factor (SCF); a substantial number of patients exhibit activating mutations in c-kit, the receptor for SCF. Mast cells can synthesize a variety of cytokines that could affect the skeletal system, increasing perforating bone resorption and leading to osteoporosis. The coexistence of hematologic disorders, such as myeloproliferative or myelodysplastic syndromes, or of lymphoreticular malignancies, is common. Compared with radiographs, Tc-99m methylenediphosphonate (MDP) scintigraphy is better able to show the widespread skeletal involvement in patients with diffuse disease. T1-weighted MR imaging is a sensitive technique for detecting marrow abnormalities in patients with systemic mastocytosis, showing several different patterns of marrow involvement. We report the imaging findings a 36-year old male with well-documented urticaria pigmentosa. In order to evaluate mastocytic bone marrow involvement, 99mTc-MDP scintigraphy, T1-weighted spin echo and short tau inversion recovery MRI at 1.0 T, were performed. Both scan findings were consistent with marrow hyperactivity. Thus, the combined use of bone scan and MRI may be useful in order to recognize marrow involvement in suspected systemic mastocytosis, perhaps avoiding bone biopsy.",
"title": ""
},
{
"docid": "47a81a3dc982326877f6d8a15c6ae05b",
"text": "Traditional rumors detection methods often rely on statistical analysis to manually select features to construct classifiers. Not only is the message feature selection difficult, but the gap, between the representation space where the shallow statistical features of information exist and the representation space where the highly abstract features including semantics and emotion of information exist, is very big. Thus, the result of traditional classifiers based on the shallow or middle features is not so good. Due to this problem, a rumors deteciton method based on Deep Bidirectional Gated Recurrent Unit (D-Bi-GRU) is presented. To capture the evolution of group response information of microblog events over time, we consider the forward and backward sequences of microblog flow of group response information along time line simultaneously. The evolution representations of deep latent space including semantic and emotion learned by stack multi layers Bi-GRUs to rumor detection. Experimental results on a real world data set showed that rumor events detection by considering bidirectional sequence of group response information simultaneously can obtain a better performance, and stack multi-layers Bi-GRUs can better detect rumor events in microblog.",
"title": ""
},
{
"docid": "2122697f764fbffc588f9a407105c5ba",
"text": "Very rare cases of human T cell acute lymphoblastic leukemia (T-ALL) harbor chromosomal translocations that involve NOTCH1, a gene encoding a transmembrane receptor that regulates normal T cell development. Here, we report that more than 50% of human T-ALLs, including tumors from all major molecular oncogenic subtypes, have activating mutations that involve the extracellular heterodimerization domain and/or the C-terminal PEST domain of NOTCH1. These findings greatly expand the role of activated NOTCH1 in the molecular pathogenesis of human T-ALL and provide a strong rationale for targeted therapies that interfere with NOTCH signaling.",
"title": ""
},
{
"docid": "be73344151ac52835ba9307e363f36d9",
"text": "BACKGROUND AND OBJECTIVE\nSmoking is the largest preventable cause of death and diseases in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches to use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states.\n\n\nMETHODS\nTo test our machine learning approaches, specifically, Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated observing sensitivity, specificity, accuracy and precision.\n\n\nRESULTS\nThe outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications.\n\n\nCONCLUSIONS\nIn conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions.",
"title": ""
},
{
"docid": "0a557bbd59817ceb5ae34699c72d79ee",
"text": "In this paper, we propose a PTS-based approach to solve the high peak-to-average power ratio (PAPR) problem in filter bank multicarrier (FBMC) system with the consider of the prototype filter and the overlap feature of the symbols in time domain. In this approach, we improve the performance of the traditional PTS approach by modifying the choice of the best weighting factors with the consideration of the overlap between the present symbol and the past symbols. The simulation result shows this approach performs better than traditional PTS approach in the reduction of PAPR in FBMC system.",
"title": ""
},
{
"docid": "8c11b7c29b4f3f4a7fe98b432b97c2b4",
"text": "Chromosome 8 is the largest autosome in which mosaic trisomy is compatible with life. Constitutional trisomy 8 (T8) is estimated to occur in approximately 0.1% of all recognized pregnancies. The estimated frequency of trisomy 8 mosaicism (T8M), also known as Warkany syndrome, is about 1/25,000 to 50,000 liveborns, and is found to be more prevalent in males than females, 5:1. T8M is known to demonstrate extreme clinical variability affecting multiple systems including central nervous, ocular, cardiac, gastrointestinal, genitourinary, and musculoskeletal. There appears to be little correlation between the level of mosaicism and the extent of the clinical phenotype. Additionally, the exact mechanism that causes the severity of phenotype in patients with T8M remains unknown. We report on a mildly dysmorphic male patient with partial low-level T8M due to a pseudoisodicentric chromosome 8 with normal 6.0 SNP microarray and high resolution chromosome analyses in lymphocytes. The aneuploidy was detected in fibroblasts and confirmed by FISH in lymphocytes. This report elaborates further the clinical variability seen in trisomy 8 mosaicism.",
"title": ""
},
{
"docid": "2559eeb2a4f2f58f82d215de134f32be",
"text": "We propose FCLT – a fully-correlational long-term tracker. The two main components of FCLT are a shortterm tracker which localizes the target in each frame and a detector which re-detects the target when it is lost. Both the short-term tracker and the detector are based on correlation filters. The detector exploits properties of the recent constrained filter learning and is able to re-detect the target in the whole image efficiently. A failure detection mechanism based on correlation response quality is proposed. The FCLT is tested on recent short-term and long-term benchmarks. It achieves state-of-the-art results on the short-term benchmarks and it outperforms the current best-performing tracker on the long-term benchmark by over 18%.",
"title": ""
}
] |
scidocsrr
|
32615bc3774e40366d7335fda3a285fc
|
Experimenting at scale with google chrome's SSL warning
|
[
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "93df5e4d848158d82bd29a125e5f3c84",
"text": "We empirically assess whether browser security warnings are as ineffective as suggested by popular opinion and previous literature. We used Mozilla Firefox and Google Chrome’s in-browser telemetry to observe over 25 million warning impressions in situ. During our field study, users continued through a tenth of Mozilla Firefox’s malware and phishing warnings, a quarter of Google Chrome’s malware and phishing warnings, and a third of Mozilla Firefox’s SSL warnings. This demonstrates that security warnings can be effective in practice; security experts and system architects should not dismiss the goal of communicating security information to end users. We also find that user behavior varies across warnings. In contrast to the other warnings, users continued through 70.2% of Google Chrome’s SSL warnings. This indicates that the user experience of a warning can have a significant impact on user behavior. Based on our findings, we make recommendations for warning designers and researchers.",
"title": ""
}
] |
[
{
"docid": "b77bf3a4cfba0033a7fcdf777c803da4",
"text": "Argumentation mining involves automatically identifying the premises, conclusion, and type of each argument as well as relationships between pairs of arguments in a document. We describe our plan to create a corpus from the biomedical genetics research literature, annotated to support argumentation mining research. We discuss the argumentation elements to be annotated, theoretical challenges, and practical issues in creating such a corpus.",
"title": ""
},
{
"docid": "ed80c1ad22dbf51bfb20351b3d7a2b8b",
"text": "Three central problems in the recent literature on visual attention are reviewed. The first concerns the control of attention by top-down (or goal-directed) and bottom-up (or stimulus-driven) processes. The second concerns the representational basis for visual selection, including how much attention can be said to be location- or object-based. Finally, we consider the time course of attention as it is directed to one stimulus after another.",
"title": ""
},
{
"docid": "dc119b4f06012c82c31fe2a30d0ccb2d",
"text": "Multipliers are the most commonly used elements in today's digital devices. In order to achieve high data throughput in digital signal processing systems, hardware multiplication is the important factor. Depending on the applications which are emerging with the electronics devices, various types of multipliers are emerged. Among all the multipliers, the basic multiplier is Array Multiplier. This paper aims at design of an optimized, low power and high speed 4- bit array multiplier by proposing Modified Gate Diffusion Input (MOD-GDI) technique. With this technique the total propagation delay, power dissipation, and no. of transistors required to design are much more decreased than compared to the Gate Diffusion Input (GDI) and CMOS techniques. Simulations are carried out on mentor graphics tools with 90nm Process technology.",
"title": ""
},
{
"docid": "25e8bbe28852ed0a4175ec8dfaf3b40b",
"text": "Differential evolution (DE) is a well-known optimization algorithm that utilizes the difference of positions between individuals to perturb base vectors and thus generate new mutant individuals. However, the difference between the fitness values of individuals, which may be helpful to improve the performance of the algorithm, has not been used to tune parameters and choose mutation strategies. In this paper, we propose a novel variant of DE with an individual-dependent mechanism that includes an individual-dependent parameter (IDP) setting and an individual-dependent mutation (IDM) strategy. In the IDP setting, control parameters are set for individuals according to the differences in their fitness values. In the IDM strategy, four mutation operators with different searching characteristics are assigned to the superior and inferior individuals, respectively, at different stages of the evolution process. The performance of the proposed algorithm is then extensively evaluated on a suite of the 28 latest benchmark functions developed for the 2013 Congress on Evolutionary Computation special session. Experimental results demonstrate the algorithm's outstanding performance.",
"title": ""
},
{
"docid": "ce48548c0004b074b18f95792f3e6ce8",
"text": "In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification. We first design a new auxiliary task based on sentiment scores of domain-independent words. We then propose two neural network architectures to respectively induce document embeddings and sentence embeddings that work well for different domains. When these document and sentence embeddings are used for sentiment classification, we find that with both pseudo and external sentiment lexicons, our proposed methods can perform similarly to or better than several highly competitive domain adaptation methods on a benchmark dataset of product reviews.",
"title": ""
},
{
"docid": "fef24d203d0a2e5d52aa887a0a442cf3",
"text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.",
"title": ""
},
{
"docid": "323c217fa6e4b0c097779379d8ca8561",
"text": "Photosynthetic antenna complexes capture and concentrate solar radiation by transferring the excitation to the reaction center that stores energy from the photon in chemical bonds. This process occurs with near-perfect quantum efficiency. Recent experiments at cryogenic temperatures have revealed that coherent energy transfer--a wave-like transfer mechanism--occurs in many photosynthetic pigment-protein complexes. Using the Fenna-Matthews-Olson antenna complex (FMO) as a model system, theoretical studies incorporating both incoherent and coherent transfer as well as thermal dephasing predict that environmentally assisted quantum transfer efficiency peaks near physiological temperature; these studies also show that this mechanism simultaneously improves the robustness of the energy transfer process. This theory requires long-lived quantum coherence at room temperature, which never has been observed in FMO. Here we present evidence that quantum coherence survives in FMO at physiological temperature for at least 300 fs, long enough to impact biological energy transport. These data prove that the wave-like energy transfer process discovered at 77 K is directly relevant to biological function. Microscopically, we attribute this long coherence lifetime to correlated motions within the protein matrix encapsulating the chromophores, and we find that the degree of protection afforded by the protein appears constant between 77 K and 277 K. The protein shapes the energy landscape and mediates an efficient energy transfer despite thermal fluctuations.",
"title": ""
},
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
},
{
"docid": "c13c97749874fd32972f6e8b75fd20d1",
"text": "Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is broadly used in text categorization systems for reducing the dimensionality. In the literature, there are some widely known metrics such as information gain and document frequency thresholding. Recently, a generative graphical model called latent dirichlet allocation (LDA) that can be used to model and discover the underlying topic structures of textual data, was proposed. In this paper, we use the hidden topic analysis of LDA for feature selection and compare it with the classical feature selection metrics in text categorization. For the experiments, we use SVM as the classifier and tf∗idf weighting for weighting the terms. We observed that almost in all metrics, information gain performs best at all keyword numbers while the LDA-based metrics perform similar to chi-square and document frequency thresholding.",
"title": ""
},
{
"docid": "fdc18ccdccefc1fd9c3f79daf549f015",
"text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; also paper delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. Also this study focuses only on vertical axis tidal turbine. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.",
"title": ""
},
{
"docid": "abc709735ff3566b9d3efa3bb9babd6e",
"text": "Disaster scenarios involve a multitude of obstacles that are difficult to traverse for humans and robots alike. Most robotic search and rescue solutions to this problem involve large, tank-like robots that use brute force to cross difficult terrain; however, these large robots may cause secondary damage. H.E.R.A.L.D, the Hybrid Exploration Robot for Air and Land Deployment, is a novel integrated system of three nimble, lightweight robots which can travel over difficult obstacles by air, but also travel through rubble. We present the design methodology and optimization of each robot, as well as design and testing of the physical integration of the system as a whole, and compare the performance of the robots to the state of the art.",
"title": ""
},
{
"docid": "22d17576fef96e5fcd8ef3dd2fb0cc5f",
"text": "I n a previous article (\" Agile Software Development: The Business of Innovation , \" Computer, Sept. 2001, pp. 120-122), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style. Over recent decades, while market forces, systems requirements, implementation technology, and project staff were changing at a steadily increasing rate, a different development style showed its advantages over the traditional one. This agile style of development directly addresses the problems of rapid change. A dominant idea in agile development is that the team can be more effective in responding to change if it can • reduce the cost of moving information between people, and • reduce the elapsed time between making a decision to seeing the consequences of that decision. To reduce the cost of moving information between people, the agile team works to • place people physically closer, • replace documents with talking in person and at whiteboards, and • improve the team's amicability—its sense of community and morale— so that people are more inclined to relay valuable information quickly. To reduce the time from decision to feedback, the agile team • makes user experts available to the team or, even better, part of the team and • works incrementally. Making user experts available as part of the team gives developers rapid feedback on the implications to the user of their design choices. The user experts, seeing the growing software in its earliest stages, learn both what the developers misunderstood and also which of their requests do not work as well in practice as they had thought. The term agile, coined by a group of people experienced in developing software this way, has two distinct connotations. The first is the idea that the business and technology worlds have become turbulent , high speed, and uncertain, requiring a process to both create change and respond rapidly to change. The first connotation implies the second one: An agile process requires responsive people and organizations. Agile development focuses on the talents and skills of individuals and molds process to specific people and teams, not the other way around. The most important implication to managers working in the agile manner is that it places more emphasis on people factors in the project: amicability, talent, skill, and communication. These qualities become a primary concern …",
"title": ""
},
{
"docid": "bf760ee2c4fe9c04f07638bd91d9675e",
"text": "Agile development methods are commonly used to iteratively develop the information systems and they can easily handle ever-changing business requirements. Scrum is one of the most popular agile software development frameworks. The popularity is caused by the simplified process framework and its focus on teamwork. The objective of Scrum is to deliver working software and demonstrate it to the customer faster and more frequent during the software development project. However the security requirements for the developing information systems have often a low priority. This requirements prioritization issue results in the situations where the solution meets all the business requirements but it is vulnerable to potential security threats. The major benefit of the Scrum framework is the iterative development approach and the opportunity to automate penetration tests. Therefore the security vulnerabilities can be discovered and solved more often which will positively contribute to the overall information system protection against potential hackers. In this research paper the authors propose how the agile software development framework Scrum can be enriched by considering the penetration tests and related security requirements during the software development lifecycle. Authors apply in this paper the knowledge and expertise from their previous work focused on development of the new information system penetration tests methodology PETA with focus on using COBIT 4.1 as the framework for management of these tests, and on previous work focused on tailoring the project management framework PRINCE2 with Scrum. The outcomes of this paper can be used primarily by the security managers, users, developers and auditors. The security managers may benefit from the iterative software development approach and penetration tests automation. The developers and users will better understand the importance of the penetration tests and they will learn how to effectively embed the tests into the agile development lifecycle. Last but not least the auditors may use the outcomes of this paper as recommendations for companies struggling with penetrations testing embedded in the agile software development process.",
"title": ""
},
{
"docid": "b9698f9fecf0b1098204dc8684ee7e4b",
"text": "In this paper we present an initial performance evaluation of the 3GPP UTRA long term evolution (UTRA LTE) uplink with baseline settings. The performance results are obtained from a detailed UTRA LTE uplink link level simulator supporting OFDMA and SC-FDMA schemes. The basic transmission scheme for uplink direction is based on single-carrier transmission in the form of DFT spread OFDM with an MMSE receiver. Two antenna configurations, SISO and 1times2 SIMO are considered in the analysis of spectral efficiency in addition to adaptive modulation and coding (AMC) and L1-HARQ. For assessment purposes, the performance results of SC-FDMA are compared with OFDMA. It is shown that 1times2 SIMO greatly increases the spectral efficiency of SC-FDMA making it comparable to OFDMA, especially for high coding rate. Furthermore, SC-FDMA has a flexibility to increase BLER performance by exploiting frequency diversity.",
"title": ""
},
{
"docid": "becd45d50ead03dd5af399d5618f1ea3",
"text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.",
"title": ""
},
{
"docid": "9f5d77e73fb63235a6e094d437f1be7e",
"text": "An improved zero-voltage and zero-current-switching (ZVZCS) full bridge dc-dc converter is proposed based on phase shift control. With an auxiliary center tapped rectifier at the secondary side, an auxiliary voltage source is applied to reset the primary current of the transformer winding. Therefore, zero-voltage switching for the leading leg switches and zero-current switching for the lagging leg switches can be achieved, respectively, without any increase of current and voltage stresses. Since the primary current in the circulating interval for the phase shift full bridge converter is eliminated, the conduction loss in primary switches is reduced. A 1 kW prototype is made to verify the theoretical analysis.",
"title": ""
},
{
"docid": "ead93ea218664f371de64036e1788aa5",
"text": "OBJECTIVE\nTo assess the diagnostic efficacy of the first-trimester anomaly scan including first-trimester fetal echocardiography as a screening procedure in a 'medium-risk' population.\n\n\nMETHODS\nIn a prospective study, we evaluated 3094 consecutive fetuses with a crown-rump length (CRL) of 45-84 mm and gestational age between 11 + 0 and 13 + 6 weeks, using transabdominal and transvaginal ultrasonography. The majority of patients were referred without prior abnormal scan or increased nuchal translucency (NT) thickness, the median maternal age was, however, 35 (range, 15-46) years, and 53.8% of the mothers (1580/2936) were 35 years or older. This was therefore a self-selected population reflecting an increased percentage of older mothers opting for prenatal diagnosis. The follow-up rate was 92.7% (3117/3363).\n\n\nRESULTS\nThe prevalence of major abnormalities in 3094 fetuses was 2.8% (86/3094). The detection rate of major anomalies at the 11 + 0 to 13 + 6-week scan was 83.7% (72/86), 51.9% (14/27) for NT < 2.5 mm and 98.3% (58/59) for NT >or= 2.5 mm. The prevalence of major congenital heart defects (CHD) was 1.2% (38/3094). The detection rate of major CHD at the 11 to 13 + 6-week scan was 84.2% (32/38), 37.5% (3/8) for NT < 2.5 mm and 96.7% (29/30) for NT >or= 2.5 mm.\n\n\nCONCLUSION\nThe overall detection rate of fetal anomalies including fetal cardiac defects following a specialist scan at 11 + 0 to 13 + 6 weeks' gestation is about 84% and is increased when NT >or= 2.5 mm. This extends the possibilities of a first-trimester scan beyond risk assessment for fetal chromosomal defects. In experienced hands with adequate equipment, the majority of severe malformations as well as major CHD may be detected at the end of the first trimester, which offers parents the option of deciding early in pregnancy how to deal with fetuses affected by genetic or structural abnormalities without pressure of time.",
"title": ""
},
{
"docid": "437e4883116d3e2cf8ab1fe3b571d3f6",
"text": "An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at luminance of 17 cd/m(2). The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per 1 year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs.",
"title": ""
},
{
"docid": "a803773ad3d9fe09c2e24b26f96cadf8",
"text": "In this paper, we propose to use hardware performance counters (HPC) to detect malicious program modifications at load time (static) and at runtime (dynamic). HPC have been used for program characterization and testing, system testing and performance evaluation, and as side channels. We propose to use HPCs for static and dynamic integrity checking of programs.. The main advantage of HPC-based integrity checking is that it is almost free in terms of hardware cost; HPCs are built into almost all processors. The runtime performance overhead is minimal because we use the operating system for integrity checking, which is called anyway for process scheduling and other interrupts. Our preliminary results confirm that HPC very efficiently detect program modifications with very low cost.",
"title": ""
},
{
"docid": "347ffb664378b56a5ae3a45d1251d7b7",
"text": "We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included in-the-box and signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.",
"title": ""
}
] |
scidocsrr
|
82467e2a9fab03fbc0abf287843a3aed
|
Navigation Techniques in Augmented and Mixed Reality: Crossing the Virtuality Continuum
|
[
{
"docid": "b8ca0badcbd28507655245bae05638a1",
"text": "In this work we investigate building indoor location based applications for a mobile augmented reality system. We believe that augmented reality is a natural interface to visualize spacial information such as position or direction of locations and objects for location based applications that process and present information based on the user’s position in the real world. To enable such applications we construct an indoor tracking system that covers a substantial part of a building. It is based on visual tracking of fiducial markers enhanced with an inertial sensor for fast rotational updates. To scale such a system to a whole building we introduce a space partitioning scheme to reuse fiducial markers throughout the environment. Finally we demonstrate two location based applications built upon this facility, an indoor navigation aid and a library search applica-",
"title": ""
}
] |
[
{
"docid": "c09e479d4adb8861884be6a83561b16d",
"text": "Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence. This year, the Amazon Alexa Prize challenge was announced for the first time, where real customers get to rate systems developed by leading universities worldwide. The aim of the challenge is to converse “coherently and engagingly with humans on popular topics for 20 minutes”. We describe our Alexa Prize system (called ‘Alana’) consisting of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking mechanism to choose a system response. The ranker was trained on real user feedback received during the competition, where we address the problem of how to train on the noisy and sparse feedback obtained during the competition.",
"title": ""
},
{
"docid": "4ace08e06cd27fdfb85708cc95791952",
"text": "In this research communication on commutative algebra it was proposed to deal with Grobner Bases and its applications in signals and systems domain.This is one of the pioneering communications in dealing with Cryo-EM Image Processing application using multi-disciplinary concepts involving thermodynamics and electromagnetics based on first principles approach. keywords: Commutative Algebra/HOL/Scala/JikesRVM/Cryo-EM Images/CoCoALib/JAS Introduction & Inspiration : Cryo-Electron Microscopy (Cryo-EM) is an expanding structural biology technique that has recently undergone a quantum leap progression in its applicability to the study of challenging nano-bio systems,because crystallization is not required,only small amounts of sample are needed, and because images can be classified using a computer, the technique has the promising potential to deal with compositional as well as conformational mixtures.Cryo-EM can be used to investigate the complete and fully functional macromolecular complexes in different functional states, providing a richness of nano-bio systems insight. In this short communication,pointing to some of the principles behind the Cryo-EM methodology of single particle analysis via references and discussing Grobner bases application to challenging systems of paramount nano-bio importance is interesting. Special emphasis is on new methodological developments that are leading to an explosion of new studies, many of which are reaching resolutions that could only be dreamed of just few years ago.[1-9][Figures I-IV] There are two main challenges facing researchers in Cryo-EM Image Processing : “(1) The first challenge is that the projection images are extremely noisy (due to the low electron dose that can interact with each molecule before it is destroyed). (2) The second is that the orientations of the molecules that produced every image is unknown (unlike crystallography where the molecules are packed in a form of a crystal and therefore share the same known orientation).Overcoming these two challenges are very much principal in the science of CryoEM. “ according to Prof. Hadani. In the context of above mentioned challenges we intend to investigate and suggest Grobner bases to process Cryo-EM Images using Thermodynamics and Electromagnetics principles.The inspiration to write this short communication was derived mainly from the works of Prof.Buchberger and Dr.Rolf Landauer. source : The physical nature of information Rolf Landauer IBM T.J. Watson Research Center, P.O. Box 218. Yorktown Heights, NY 10598, USA . source : Gröbner Bases:A Short Introduction for Systems Theorists -Bruno Buchberger Research Institute for Symbolic Computation University of Linz,A4232 Schloss,Hagenberg,Austria. Additional interesting facts are observed from an article by Jon Cohen : “Structural Biology – Is HighTech View of HIV Too Good To Be True ?”. (http://davidcrowe.ca/SciHealthEnv/papers/9599-IsHighTechViewOfHIVTooGoodToBeTrue.pdf) Researchers are only interested in finding better software tools to refine the cryo-em image processing tasks on hand using all the mathematical tools at their disposal.Commutative Algebra is one such promising tool.Hence the justification for using Grobner Bases. Informatics Framework Design,Implementation & Analysis : Figure I. 
Mathematical Algorithm Implementation and Software Architecture -Overall Idea presented in the paper.Self Explanatory Graphical Algorithm Please Note : “Understanding JikesRVM in the Context of Cryo-EM/TEM/SEM Imaging Algorithms and Applications – A General Informatics Introduction from a Software Architecture View Point” by Nirmal & Gagik 2016 could be useful. Figure II. Mathematical Algorithm with various Grobner Bases Mathematical Tools/Software.Self Explanatory Graphical Algorithm Figure III.Scala and Java based Software Architecture Flow Self Explanatory Graphical Algorithm Figure IV. Mathematical Algorithm involving EM Field Theory & Thermodynamics Self Explanatory Graphical Algorithm",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "e1a6009d79b55610980db11c8a1def8b",
"text": "This paper presents the design and implementation of a high-performance fully-digital PWM DAC and switching output stage which can drive a speaker in portable devices, including cellular phones. Thanks to the quaternary pulse-width modulation scheme, filter-less implementation are possible. A pre-modulation DSP algorithm eliminates the harmonic distortion inherent to the employed modulation process, and an oversampling noise shaper reduces the modulator clock speed to facilitate the hardware implementation while keeping high-fidelity quality. Radiated electromagnetic field emission of the class D amplifier is reduced thanks to a clock spreading technique with only a minor impact on audio performance characteristics. Clock jitter effects on the audio amplifier performance are presented, showing very low degradation for jitter value up to a few nanoseconds. The digital section works with a 1.2 V power supply voltage, while the output switching stage and its driver are supplied from a high-efficiency DC-DC converter either at 3.6 V or 5 V. An output power of 0.5 W at 3.6 V and 1 W at 5 V over an 8 Ω load with efficiency (digital section included) of about 79% and 81%, respectively, has been achieved. The total harmonic distortion (THD) at maximum output level is about 0.2%, while the dynamic range is 104 dB A-weighted. The active area is about 0.94 mm2 in a 0.13 μm single-poly, five-metal, N-well digital CMOS technology with double-oxide option (0.5 μm minimum length).",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "75b075bb5f125031d30361f07dbafb65",
"text": "Real world prediction problems often involve the simultaneous prediction of multiple target variables using the same set of predictive variables. When the target variables are binary, the prediction task is called multi-label classification while when the target variables are realvalued the task is called multi-target regression. Although multi-target regression attracted the attention of the research community prior to multi-label classification, the recent advances in this field motivate a study of whether newer state-of-the-art algorithms developed for multilabel classification are applicable and equally successful in the domain of multi-target regression. In this paper we introduce two new multitarget regression algorithms: multi-target stacking (MTS) and ensemble of regressor chains (ERC), inspired by two popular multi-label classification approaches that are based on a single-target decomposition of the multi-target problem and the idea of treating the other prediction targets as additional input variables that augment the input space. Furthermore, we detect an important shortcoming on both methods related to the methodology used to create the additional input variables and develop modified versions of the algorithms (MTSC and ERCC) to tackle it. All methods are empirically evaluated on 12 real-world multi-target regression datasets, 8 of which are first introduced in this paper and are made publicly available for future benchmarks. The experimental results show that ERCC performs significantly better than both a strong baseline that learns a single model for each target using bagging of regression trees and the state-of-the-art multi-objective random forest approach. Also, the proposed modification results in significant performance gains for both MTS and ERC.",
"title": ""
},
{
"docid": "651ddcbc6d514da005d0d4319a325e96",
"text": "Convolutional Neural Networks (CNNs) have recently demonstrated a superior performance in computer vision applications; including image retrieval. This paper introduces a bilinear CNN-based model for the first time in the context of Content-Based Image Retrieval (CBIR). The proposed architecture consists of two feature extractors using a pre-trained deep CNN model fine-tuned for image retrieval task to generate a Compact Root Bilinear CNN (CRB-CNN) architecture. Image features are directly extracted from the activations of convolutional layers then pooled at image locations. Additionally, the output size of bilinear features is largely reduced to a compact but high descriminative image representation using kernal-based low-dimensional projection and pooling, which is a fundamental improvement in the retrieval performance in terms of search speed and memory size. An end-to-end training is applied by back-probagation to learn the parameters of the final CRB-CNN. Experimental results reported on the standard Holidays image dataset show the efficiency of the architecture at extracting and learning even complex features for CBIR tasks. Specifically, using a vector of 64-dimension, it achieves 95.13% mAP accuracy and outperforms the best results of state-of-the-art approaches.",
"title": ""
},
{
"docid": "17642e2f5ac7d6594df72deacab332fb",
"text": "Paraphrase patterns are semantically equivalent patterns, which are useful in both paraphrase recognition and generation. This paper presents a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the paraphrase patterns in English are extracted using the patterns in another language as pivots. We make use of log-linear models for computing the paraphrase likelihood between pattern pairs and exploit feature functions based on maximum likelihood estimation (MLE), lexical weighting (LW), and monolingual word alignment (MWA). Using the presented method, we extract more than 1 million pairs of paraphrase patterns from about 2 million pairs of bilingual parallel sentences. The precision of the extracted paraphrase patterns is above 78%. Experimental results show that the presented method significantly outperforms a well-known method called discovery of inference rules from text (DIRT). Additionally, the log-linear model with the proposed feature functions are effective. The extracted paraphrase patterns are fully analyzed. Especially, we found that the extracted paraphrase patterns can be classified into five types, which are useful in multiple natural language processing (NLP) applications.",
"title": ""
},
{
"docid": "4c87f3fb470cb01781b563889b1261d2",
"text": "Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., ICCV 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.",
"title": ""
},
{
"docid": "7cfd90a3c9091c296e621ff34fc471e6",
"text": "The study aimed to develop machine learning models that have strong prediction power and interpretability for diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from the examination of retinal nerve fiber layer (RNFL) thickness and visual field (VF). We also developed synthesized features from original features. We then selected the best features proper for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it by using the validation dataset. Finally, we got the best learning model that produces the highest validation accuracy. We analyzed quality of the models using several measures. The random forest model shows best performance and C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying among glaucoma and healthy eyes. It will be used for predicting glaucoma against unknown examination records. Clinicians may reference the prediction results and be able to make better decisions. We may combine multiple learning models to increase prediction accuracy. The C5.0 model includes decision rules for prediction. It can be used to explain the reasons for specific predictions.",
"title": ""
},
{
"docid": "cd7b967fd59f37d1feccb7cb74bac816",
"text": "A comprehensive security solution is no longer an option, and needs to be designed bottom-up into the car software. The architecture needs to be scalable and tiered, leveraging the proven technologies, processes and policies from the mature industries. The objective is to detect, defend and recover from any attack before harm comes to passengers, data and instrumentation. No matter how hardened security is there is always a need to patch any security vulnerabilities. This paper presents high level framework for security and over the air (OTA) framework.",
"title": ""
},
{
"docid": "6acb7aa3228dd128266438d0ae3ed22a",
"text": "Purpose: of this paper is to introduce the reader to the characteristics of PDCA tool and Six Sigma (DMAIC, DFSS) techniques and EFQM Excellence Model (RADAR matrix), which are possible to use for the continuous quality improvement of products, processes and services in organizations. Design/methodology/approach: We compared the main characteristics of the presented methodologies aiming to show the main prerequisites, differences, strengths and limits in their application. Findings: Depending on the purpose every organization will have to find a proper way and a combination of methodologies in its implementation process. The PDCA cycle is a well known fundamental concept of continuousimprovement processes, RADAR matrix provides a structured approach assessing the organizational performance, DMAIC is a systematic, and fact based approach providing framework of results-oriented project management, DFSS is a systematic approach to new products or processes design focusing on prevent activities. Research limitations/implications: This paper provides general information and observations on four presented methodologies. Further research could be done towards more detailed study of characteristics and positive effects of these methodologies. Practical implications: The paper presents condensed presentation of main characteristics, strengths and limitations of presented methodologies. Our findings could be used as solid information for management decisions about the introduction of various quality programmes. Originality/value: We compared four methodologies and showed their main characteristics and differences. We showed that some methodologies are more simple and therefore easily to understand and introduce (e.g. PDCA cycle). On the contrary Six Sigma and EFQM Excellence model are more complex and demanding methodologies and therefore need more time and resources for their proper implementation.",
"title": ""
},
{
"docid": "38037437ce3e86cda024f81cbd81cd6f",
"text": "BACKGROUND\nIt is widely known that more boys are born during and immediately after wars, but there has not been any ultimate (evolutionary) explanation for this 'returning soldier effect'. Here, I suggest that the higher sex ratios during and immediately after wars might be a byproduct of the fact that taller soldiers are more likely to survive battle and that taller parents are more likely to have sons.\n\n\nMETHODS\nI analyze a large sample of British Army service records during World War I.\n\n\nRESULTS\nSurviving soldiers were on average more than one inch (3.33 cm) taller than fallen soldiers.\n\n\nCONCLUSIONS\nConservative estimates suggest that the one-inch height advantage alone is more than twice as sufficient to account for all the excess boys born in the UK during and after World War I. While it remains unclear why taller soldiers are more likely to survive battle, I predict that the returning soldier effect will not happen in more recent and future wars.",
"title": ""
},
{
"docid": "a12b3f0a2426cba3f0bbcb2e2b687959",
"text": "There has been an increasing interest in the millimeter wave (mmW) frequency regime in the design of the next-generation wireless systems. The focus of this paper is on understanding mmW channel properties that have an important bearing on the feasibility of mmW systems in practice and have a significant impact on physical layer design. In this direction, simultaneous channel sounding measurements at 2.9, 29, and 61 GHz are performed at a number of transmit–receive location pairs in indoor office, shopping mall, and outdoor environments. Based on these measurements, this paper first studies large-scale properties, such as path loss and delay spread across different carrier frequencies in these scenarios. Toward the goal of understanding the feasibility of outdoor-to-indoor coverage, material measurements corresponding to mmW reflection and penetration are studied and significant notches in signal reception spread over a few gigahertz are reported. Finally, implications of these measurements on system design are discussed, and multiple solutions are proposed to overcome these impairments.",
"title": ""
},
{
"docid": "37fce1406c54de9a31efe0c9e836cab5",
"text": "The field of the neurobiology of language is experiencing a paradigm shift in which the predominant Broca-Wernicke-Geschwind language model is being revised in favor of models that acknowledge that language is processed within a distributed cortical and subcortical system. While it is important to identify the brain regions that are part of this system, it is equally important to establish the anatomical connectivity supporting their functional interactions. The most promising framework moving forward is one in which language is processed via two interacting \"streams\"--a dorsal and ventral stream--anchored by long association fiber pathways, namely the superior longitudinal fasciculus/arcuate fasciculus, uncinate fasciculus, inferior longitudinal fasciculus, inferior fronto-occipital fasciculus, and two less well-established pathways, the middle longitudinal fasciculus and extreme capsule. In this article, we review the most up-to-date literature on the anatomical connectivity and function of these pathways. We also review and emphasize the importance of the often overlooked cortico-subcortical connectivity for speech via the \"motor stream\" and associated fiber systems, including a recently identified cortical association tract, the frontal aslant tract. These pathways anchor the distributed cortical and subcortical systems that implement speech and language in the human brain.",
"title": ""
},
{
"docid": "40be421f4d66283357c22fa9cd59790f",
"text": "We have examined standards required for successful e-commerce (EC) architectures and evaluated the strengths and limitations of current systems that have been developed to support EC. We find that there is an unfilled need for systems that can reliably locate buyers and sellers in electronic marketplaces and also facilitate automated transactions. The notion of a ubiquitous network where loosely coupled buyers and sellers can reliably find each other in real time, evaluate products, negotiate prices, and conduct transactions is not adequately supported by current systems. These findings were based on an analysis of mainline EC architectures: EDI, company Websites, B2B hubs, e-Procurement systems, and Web Services. Limitations of each architecture were identified. Particular attention was given to the strengths and weaknesses of the Web Services architecture, since it may overcome some limitations of the other approaches. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "03aac64e2d209d628874614d061b90f9",
"text": "Patterns of reading development were examined in native English-speaking (L1) children and children who spoke English as a second language (ESL). Participants were 978 (790 L1 speakers and 188 ESL speakers) Grade 2 children involved in a longitudinal study that began in kindergarten. In kindergarten and Grade 2, participants completed standardized and experimental measures including reading, spelling, phonological processing, and memory. All children received phonological awareness instruction in kindergarten and phonics instruction in Grade 1. By the end of Grade 2, the ESL speakers' reading skills were comparable to those of L1 speakers, and ESL speakers even outperformed L1 speakers on several measures. The findings demonstrate that a model of early identification and intervention for children at risk is beneficial for ESL speakers and also suggest that the effects of bilingualism on the acquisition of early reading skills are not negative and may be positive.",
"title": ""
},
{
"docid": "4e1ba3178e40738ccaf2c2d76dd417d8",
"text": "We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant and the data is available online.",
"title": ""
},
{
"docid": "f5713b2e233848cab82db0007099a39c",
"text": "The term 'critical design' is on the upswing in HCI. We analyze how discourses around 'critical design' are diverging in Design and HCI. We argue that this divergence undermines HCI's ability to learn from and appropriate the design approaches signaled by this term. Instead, we articulate two ways to broaden and deepen connections between Design and HCI: (1) develop a broader collective understanding of what these design approaches can be, without forcing them to be about 'criticality' or 'critical design,' narrowly construed; and (2) shape a variation of design criticism to better meet Design practices, terms, and ways of knowing.",
"title": ""
}
] |
scidocsrr
|
db20f8e08b1c673bb8a89c7d6e43c23e
|
VRID-1: A basic vehicle re-identification dataset for similar vehicles
|
[
{
"docid": "0b4db8d0b2347de537c5d21bc260d68b",
"text": "Vehicle, as a significant object class in urban surveillance, attracts massive focuses in computer vision field, such as detection, tracking, and classification. Among them, vehicle re-identification (Re-Id) is an important yet frontier topic, which not only faces the challenges of enormous intra-class and subtle inter-class differences of vehicles in multicameras, but also suffers from the complicated environments in urban surveillance scenarios. Besides, the existing vehicle related datasets all neglect the requirements of vehicle Re-Id: 1) massive vehicles captured in real-world traffic environment; and 2) applicable recurrence rate to give cross-camera vehicle search for vehicle Re-Id. To facilitate vehicle Re-Id research, we propose a large-scale benchmark dataset for vehicle Re-Id in the real-world urban surveillance scenario, named “VeRi”. It contains over 40,000 bounding boxes of 619 vehicles captured by 20 cameras in unconstrained traffic scene. Moreover, each vehicle is captured by 2~18 cameras in different viewpoints, illuminations, and resolutions to provide high recurrence rate for vehicle Re-Id. Finally, we evaluate six competitive vehicle Re-Id methods on VeRi and propose a baseline which combines the color, texture, and highlevel semantic information extracted by deep neural network.",
"title": ""
}
] |
[
{
"docid": "333b3349cdcb6ddf44c697e827bcfe62",
"text": "Harmful cyanobacterial blooms, reflecting advanced eutrophication, are spreading globally and threaten the sustainability of freshwater ecosystems. Increasingly, non-nitrogen (N(2))-fixing cyanobacteria (e.g., Microcystis) dominate such blooms, indicating that both excessive nitrogen (N) and phosphorus (P) loads may be responsible for their proliferation. Traditionally, watershed nutrient management efforts to control these blooms have focused on reducing P inputs. However, N loading has increased dramatically in many watersheds, promoting blooms of non-N(2) fixers, and altering lake nutrient budgets and cycling characteristics. We examined this proliferating water quality problem in Lake Taihu, China's 3rd largest freshwater lake. This shallow, hyper-eutrophic lake has changed from bloom-free to bloom-plagued conditions over the past 3 decades. Toxic Microcystis spp. blooms threaten the use of the lake for drinking water, fisheries and recreational purposes. Nutrient addition bioassays indicated that the lake shifts from P limitation in winter-spring to N limitation in cyanobacteria-dominated summer and fall months. Combined N and P additions led to maximum stimulation of growth. Despite summer N limitation and P availability, non-N(2) fixing blooms prevailed. Nitrogen cycling studies, combined with N input estimates, indicate that Microcystis thrives on both newly supplied and previously-loaded N sources to maintain its dominance. Denitrification did not relieve the lake of excessive N inputs. Results point to the need to reduce both N and P inputs for long-term eutrophication and cyanobacterial bloom control in this hyper-eutrophic system.",
"title": ""
},
{
"docid": "dc88dfc1085a38c41179a625b4856ba2",
"text": "One critical aspect neural network designers face today is choosing an appropriate network size for a given application. Network size involves in the case of layered neural network architectures, the number of layers in a network, the number of nodes per layer, and the number of connections. Roughly speaking, a neural network implements a nonlinear mapping of u=G(x). The mapping function G is established during a training phase where the network learns to correctly associate input patterns x to output patterns u. Given a set of training examples (x, u), there is probably an infinite number of different size networks that can learn to map input patterns x into output patterns u. The question is, which network size is more appropriate for a given problem? Unfortunately, the answer to this question is not always obvious. Many researchers agree that the quality of a solution found by a neural network depends strongly on the network size used. In general, network size affects network complexity, and learning time. It also affects the generalization capabilities of the network; that is, its ability-to produce accurate results on patterns outside its training set.<<ETX>>",
"title": ""
},
{
"docid": "c180b23f5c25fa57b27408b1229d7037",
"text": "We study the Relay-based Cooperative Data Exchange (RCDE) problem, where initially each client has access to a subset of a set of n original packets, referred to as their side information, and wants to retrieve all other original packets via co-operation. Unlike traditional Cooperative Data Exchange (CDE), in our proposed relay-based model, clients can only cooperate via a relay. The data exchange is completed over two phases, namely Uploading Phase and Downloading Phase. In the Uploading Phase, the clients will encode the original packets and transmit the coded packets to the relay. In the Downloading Phase, the relay will reencode the received packets and multicast the reencoded packets, each to a subgroup of clients. The coded packets in the two phases are carefully selected so that each client can retrieve all original packets with minimum total transmission delay, based on its initial side information and on the coded packets it receives from the relay. In addition, we assume that the bandwidths between the relay and different clients are different, and that the upload/download bandwidths between the relay and the same client are also different. We establish a coding scheme that has the minimum total delay and show that it can be found in polynomial time, for sufficiently large underlying fields. We also design a heuristic algorithm that has a low complexity with binary field size. Our simulations show that the performance of the binary solution is very close to that of the optimal solution. All coding schemes considered in this work are scalar.",
"title": ""
},
{
"docid": "5cccc7cc748d3461dc3c0fb42a09245f",
"text": "The self and attachment difficulties associated with chronic childhood abuse and other forms of pervasive trauma must be understood and addressed in the context of the therapeutic relationship for healing to extend beyond resolution of traditional psychiatric symptoms and skill deficits. The authors integrate contemporary research and theory about attachment and complex developmental trauma, including dissociation, and apply it to psychotherapy of complex trauma, especially as this research and theory inform the therapeutic relationship. Relevant literature on complex trauma and attachment is integrated with contemporary trauma theory as the background for discussing relational issues that commonly arise in this treatment, highlighting common challenges such as forming a therapeutic alliance, managing frame and boundaries, and working with dissociation and reenactments.",
"title": ""
},
{
"docid": "905bab5da79334c528efb2efd3b0a7c4",
"text": "A time-optimal speed schedule results in a mobile robot driving along a planned path at or near the limits of the robot's capability. However, deriving models to predict the effect of increased speed can be very difficult. In this paper, we present a speed scheduler that uses previous experience, instead of complex models, to generate time-optimal speed schedules. The algorithm is designed for a vision-based, path-repeating mobile robot and uses experience to ensure reliable localization, low path-tracking errors, and realizable control inputs while maximizing the speed along the path. To our knowledge, this is the first speed scheduler to incorporate experience from previous path traversals in order to address system constraints. The proposed speed scheduler was tested in over 4 km of path traversals in outdoor terrain using a large Ackermann-steered robot travelling between 0.5 m/s and 2.0 m/s. The approach to speed scheduling is shown to generate fast speed schedules while remaining within the limits of the robot's capability.",
"title": ""
},
{
"docid": "78d7c61f7ca169a05e9ae1393712cd69",
"text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-ofthe-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.",
"title": ""
},
{
"docid": "f8d256bf6fea179847bfb4cc8acd986d",
"text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.",
"title": ""
},
{
"docid": "6291f21727c70d3455a892a8edd3b18c",
"text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.",
"title": ""
},
{
"docid": "3f85dea7d56f696b30d30dc74676cc48",
"text": "hch@lst.de X F s I s a F I l e s y s t e m t h at w a s d e signed from day one for computer systems with large numbers of CPUs and large disk arrays. It focuses on supporting large files and good streaming I/O performance. It also has some interesting administrative features not supported by other Linux file systems. This article gives some background information on why XFS was created and how it differs from the familiar Linux file systems. You may discover that XFS is just what your project needs instead of making do with the default Linux file system.",
"title": ""
},
{
"docid": "2781df07db142da8eefbe714631a59b2",
"text": "Snapchat is a social media platform that allows users to send images, videos, and text with a specified amount of time for the receiver(s) to view the content before it becomes permanently inaccessible to the receiver. Using focus group methodology and in-depth interviews, the current study sought to understand young adult (18e23 years old; n 1⁄4 34) perceptions of how Snapchat behaviors influenced their interpersonal relationships (family, friends, and romantic). Young adults indicated that Snapchat served as a double-edged swordda communication modality that could lead to relational challenges, but also facilitate more congruent communication within young adult interpersonal relationships. © 2016 Elsevier Ltd. All rights reserved. Technology is now a regular part of contemporary young adult (18e25 years old) life (Coyne, Padilla-Walker, & Howard, 2013; Vaterlaus, Jones, Patten, & Cook, 2015). With technological convergence (i.e. accessibility of multiple media on one device; Brown & Bobkowski, 2011) young adults can access both entertainment media (e.g., television, music) and social media (e.g., social networking, text messaging) on a single device. Among adults, smartphone ownership is highest among young adults (85% of 18e29 year olds; Smith, 2015). Perrin (2015) reported that 90% of young adults (ages 18e29) use social media. Facebook remains the most popular social networking platform, but several new social media apps (i.e., applications) have begun to gain popularity among young adults (e.g., Twitter, Instagram, Pinterest; Duggan, Ellison, Lampe, Lenhart, & Madden, 2015). Considering the high frequency of social media use, Subrahmanyam and Greenfield (2008) have advocated for more research on how these technologies influence interpersonal relationships. The current exploratory study aterlaus), Kathryn_barnett@ (C. Roche), youngja2@unk. was designed to understand the perceived role of Snapchat (see www.snapchat.com) in young adults' interpersonal relationships (i.e. family, social, and romantic). 1. Theoretical framework Uses and Gratifications Theory (U&G) purports that media and technology users are active, self-aware, and goal directed (Katz, Blumler, & Gurevitch, 1973). Technology consumers link their need gratification with specific technology options, which puts different technology sources in competition with one another to satisfy a consumer's needs. Since the emergence of U&G nearly 80 years ago, there have been significant advances in media and technology, which have resulted in many more media and technology options for consumers (Ruggiero, 2000). Writing about the internet and U&G in 2000, Roggiero forecasted: “If the internet is a technology that many predict will be genuinely transformative, it will lead to profound changes in media users' personal and social habits and roles” (p.28). Advances in accessibility to the internet and the development of social media, including Snapchat, provide support for the validity of this prediction. Despite the advances in technology, the needs users seek to gratify are likely more consistent over time. Supporting this point Katz, Gurevitch, and Haas J.M. Vaterlaus et al. / Computers in Human Behavior 62 (2016) 594e601 595",
"title": ""
},
{
"docid": "df48f9d3096d8528e9f517783a044df8",
"text": "We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each of these innovations.",
"title": ""
},
{
"docid": "2472f4bdd73c5676761adfa067dbf432",
"text": "This extensively modified version of the Sassouni Cephalometric Analysis is very beneficial to the dentist treating functional orthodontic and TMD patients. Some practitioners even derive benefits from its application when determining vertical in the edentulous patient. The NFO analysis has been shown to be of great benefit to determine vertical proportion and growth potential of the young patient. The analysis has the ability to show incisor placement relative to opening and closing trajectory and where to place the mandible for functional advancement. Practitioners need a diagnostic cephalogram that is visual and descriptive of the skeletal and dental malocclusion. This analysis provides many tools that will assist the clinician in making those decisions.",
"title": ""
},
{
"docid": "fae2ba1403ac0f98ea221825dfc3c82e",
"text": "Generative models have long been the dominant approach for speech recognition. The success of these models however relies on the use of sophisticated recipes and complicated machinery that is not easily accessible to non-practitioners. Recent innovations in Deep Learning have given rise to an alternative – discriminative models called Sequence-to-Sequence models, that can almost match the accuracy of state of the art generative models. While these models are easy to train as they can be trained end-to-end in a single step, they have a practical limitation that they can only be used for offline recognition. This is because the models require that the entirety of the input sequence be available at the beginning of inference, an assumption that is not valid for instantaneous speech recognition. To address this problem, online sequence-to-sequence models were recently introduced. These models are able to start producing outputs as data arrives, and the model feels confident enough to output partial transcripts. These models, like sequence-to-sequence are causal – the output produced by the model until any time, t, affects the features that are computed subsequently. This makes the model inherently more powerful than generative models that are unable to change features that are computed from the data. This paper highlights two main contributions – an improvement to online sequence-to-sequence model training, and its application to noisy settings with mixed speech from two speakers.",
"title": ""
},
{
"docid": "0dfd5345c2dc3fe047dcc635760ffedd",
"text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.",
"title": ""
},
{
"docid": "e4e7b1b9ec8f0688d2d10206be59cd99",
"text": "Recognizing TimeML events and identifying their attributes, are important tasks in natural language processing (NLP). Several NLP applications like question answering, information retrieval, summarization, and temporal information extraction need to have some knowledge about events of the input documents. Existing methods developed for this task are restricted to limited number of languages, and for many other languages including Persian, there has not been any effort yet. In this paper, we introduce two different approaches for automatic event recognition and classification in Persian. For this purpose, a corpus of events has been built based on a specific version of ISO-TimeML for Persian. We present the specification of this corpus together with the results of applying mentioned approaches to the corpus. Considering these methods are the first effort towards Persian event extraction, the results are comparable to that of successful methods in English. TITLE AND ABSTRACT IN PERSIAN اھداديور جارختسا زا یسراف نوتم فيرعت رب انب ISO-TimeML نتفاي اھداديور یگژيو و اھنآ یاھ ساسا رب TimeML زا یکي لئاسم هزوح رد مھم یعيبط یاھ نابز شزادرپ ی تسا . نابز شزادرپ یاھدربراک زا یرايسب هناماس دننام یعيبط یاھ و یزاس هص2خ ،تاع2طا جارختسا ،خساپ و شسرپ یاھ ات دنراد زاين ینامز تاع2طا جارختسا هرابرد یشناد یاھداديور رد دوجوم نوتم یدورو شور .دنشاب هتشاد هک یياھ نيا دروم رد نونکات هدش داجيا هلئسم نابز دنچ هب دودحم ، صاخ نابز زا یرايسب رد و تسا اھ هلمج زا ،یسراف نابز یراک نونکات هدشن ماجنا هطبار نيا رد یسراف نابز رد اھداديور جارختسا یارب فلتخم شور ود ام ،هلاقم نيا رد .تسا یم هئارا .ميھد یارب هرکيپ ،راک نيا اب قباطم یا ISO-TimeML ، سن هتبلا هخ دش هتخاس ،نآ یسراف صاخ ی ام . ناشن ار ،نآ یور رب لصاح جياتن و هرکيپ نيا تاصخشم یم ميھد شور جياتن . هئارا یاھ هدش هلاقم نيا رد ناونع هب ، شور نيلوا هدايپ یاھ اب ،یسراف نابز یور رب هدش یزاس .تسا هسياقم لباق یسيلگنا نابز رد قفوم یاھ شور",
"title": ""
},
{
"docid": "28ab0f1a25c449ee35909334afa7d7a2",
"text": "Data from the U.S. Defense Meteorological Satellite Program’s Operational Line-scan System are often used to map impervious surface area (ISA) distribution at regional and global scales, but its coarse spatial resolution and data saturation produce high inaccuracy in ISA estimation. Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite’s Day/Night Band (VIIRS-DNB) with its high spatial resolution and dynamic data range may provide new insights but has not been fully examined in mapping ISA distribution. In this paper, a new variable—Large-scale Impervious Surface Index (LISI)—is proposed to integrate VIIRS-DNB and Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) data for mapping ISA distribution. A regression model was established, in which LISI was used as an independent variable and the reference ISA from Landsat images was a dependent variable. The results indicated a better estimation performance using LISI than using a single VIIRS-DNB or MODIS NDVI variable. The LISI-based approach provides accurate spatial patterns from high values in core urban areas to low values in rural areas, with an overall root mean squared error of 0.11. The LISI-based approach is recommended for fractional ISA estimation in a large area. OPEN ACCESS Remote Sens. 2015, 7 12460",
"title": ""
},
{
"docid": "046f6c5cc6065c1cb219095fb0dfc06f",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "edf350dfe9680f40a38f7dd2fde42fbb",
"text": "Multimodal sentiment analysis is drawing an increasing amount of attention these days. It enables mining of opinions in video reviews which are now available aplenty on online platforms. However, multimodal sentiment analysis has only a few high-quality data sets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models, where, for example, the unique characteristics of a few speakers (e.g., wearing glasses) may become a confounding factor for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that our SAL approach improves prediction accuracy significantly in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, achieves good generalization across two new test datasets.",
"title": ""
},
{
"docid": "2878ed8d0da40bd3363f7b8eabb79faf",
"text": "In this chapter, we present the current knowledge on de novo assembly, growth, and dynamics of striated myofibrils, the functional architectural elements developed in skeletal and cardiac muscle. The data were obtained in studies of myofibrils formed in cultures of mouse skeletal and quail myotubes, in the somites of living zebrafish embryos, and in mouse neonatal and quail embryonic cardiac cells. The comparative view obtained revealed that the assembly of striated myofibrils is a three-step process progressing from premyofibrils to nascent myofibrils to mature myofibrils. This process is specified by the addition of new structural proteins, the arrangement of myofibrillar components like actin and myosin filaments with their companions into so-called sarcomeres, and in their precise alignment. Accompanying the formation of mature myofibrils is a decrease in the dynamic behavior of the assembling proteins. Proteins are most dynamic in the premyofibrils during the early phase and least dynamic in mature myofibrils in the final stage of myofibrillogenesis. This is probably due to increased interactions between proteins during the maturation process. The dynamic properties of myofibrillar proteins provide a mechanism for the exchange of older proteins or a change in isoforms to take place without disassembling the structural integrity needed for myofibril function. An important aspect of myofibril assembly is the role of actin-nucleating proteins in the formation, maintenance, and sarcomeric arrangement of the myofibrillar actin filaments. This is a very active field of research. We also report on several actin mutations that result in human muscle diseases.",
"title": ""
},
{
"docid": "29f84c19870151abc266a1f37702643c",
"text": "Choice poetics is a formalist framework that seeks to concretely describe the impacts choices have on player experiences within narrative games. Developed in part to support algorithmic generation of narrative choices, the theory includes a detailed analytical framework for understanding the impressions choice structures make by analyzing the relationships among options, outcomes, and player goals. The theory also emphasizes the need to account for players’ various modes of engagement, which vary both during play and between players. In this work, we illustrate the non-computational application of choice poetics to the analysis of two different games to further develop the theory and make it more accessible to others. We focus first on using choice poetics to examine the central repeated choice in “Undertale,” and show how it can be used to contrast two different player types that will approach a choice differently. Finally, we give an example of fine-grained analysis using a choice from the game “Papers, Please,” which breaks down options and their outcomes to illustrate exactly how the choice pushes players towards complicity via the introduction of uncertainty. Through all of these examples, we hope to show the usefulness of choice poetics as a framework for understanding narrative choices, and to demonstrate concretely how one could productively apply it to choices “in the wild.”",
"title": ""
}
] |
scidocsrr
|
d8fd0b3004cfa565f0687ba76adbc6f3
|
Synthetic Word Parsing Improves Chinese Word Segmentation
|
[
{
"docid": "1aeace70da31d29cb880e61817432bf7",
"text": "This paper investigates improving supervised word segmentation accuracy with unlabeled data. Both large-scale in-domain data and small-scale document text are considered. We present a unified solution to include features derived from unlabeled data to a discriminative learning model. For the large-scale data, we derive string statistics from Gigaword to assist a character-based segmenter. In addition, we introduce the idea about transductive, document-level segmentation, which is designed to improve the system recall for out-ofvocabulary (OOV) words which appear more than once inside a document. Novel features1 result in relative error reductions of 13.8% and 15.4% in terms of F-score and the recall of OOV words respectively.",
"title": ""
}
] |
[
{
"docid": "add026119d82ec730038fcc3521304c5",
"text": "Deep Learning has emerged as a new area in machine learning and is applied to a number of signal and image applications.The main purpose of the work presented in this paper, is to apply the concept of a Deep Learning algorithm namely, Convolutional neural networks (CNN) in image classification. The algorithm is tested on various standard datasets, like remote sensing data of aerial images (UC Merced Land Use Dataset) and scene images from SUN database. The performance of the algorithm is evaluated based on the quality metric known as Mean Squared Error (MSE) and classification accuracy. The graphical representation of the experimental results is given on the basis of MSE against the number of training epochs. The experimental result analysis based on the quality metrics and the graphical representation proves that the algorithm (CNN) gives fairly good classification accuracy for all the tested datasets.",
"title": ""
},
{
"docid": "75ed78f9a59ec978432f16fd4407df60",
"text": "The transition from user requirements to UML diagrams is a difficult task for the designer espec ially when he handles large texts expressing these needs. Modelin g class Diagram must be performed frequently, even during t he development of a simple application. This paper prop oses an approach to facilitate class diagram extraction from textual requirements using NLP techniques and domain ontolog y. Keywords-component; Class Diagram, Natural Language Processing, GATE, Domain ontology, requirements.",
"title": ""
},
{
"docid": "c3ca9b126dc12065e80faf472fe41375",
"text": "Continued advancements in Si-based technologies - in particular SiGe BiCMOS technologies - have enabled mmWave integrated circuits designed for 77GHz automotive radar systems to reach production-level maturity. This paper will discuss the technology requirements for mmWave automotive radar products, and also present the evolution of the respective key figures of merit. Si-based CMOS challenges and opportunities will also be discussed.",
"title": ""
},
{
"docid": "d585ab801ecc1c5e33092cd17c7d2f81",
"text": "This study empirically examined the informational and normative based determinants of perceived credibility of online consumer recommendations in China. Past literature demonstrated that informational influence is important in affecting reader’s evaluation of incoming information and the effectiveness of a communication. This study extends from the previous Word-of-Mouth studies by including the normative factors. Since online consumer discussion is characterized by its social aggregation, we argue that several normative cues could be salient and play significant roles in shaping a reader’s credibility evaluation towards the eWOM recommendation. The informational determinants (argument strength, source credibility and confirmation with receiver’s prior belief) and the normative determinants (recommendation consistency and rating) are investigated via an online survey to users of a famous online consumer discussion site in China (myetone.com). Results supported our proposed research model which substantiates the effects of perceived eWOM review credibility from both informational-based and normative-based determinants. This research provides researcher and practitioners with insights on receiver’s eWOM evaluation.",
"title": ""
},
{
"docid": "a3e8dd1f3fbca95857a96c0635eb60c6",
"text": "Many maximum power point tracking techniques for photovoltaic systems have been developed to maximize the produced energy and a lot of these are well established in the literature. These techniques vary in many aspects as: simplicity, convergence speed, digital or analogical implementation, sensors required, cost, range of effectiveness, and in other aspects. This paper presents a comparative study of ten widely-adopted MPPT algorithms; their performance is evaluated on the energy point of view, by using the simulation tool Simulink®, considering different solar irradiance variations. Key-Words: Maximum power point (MPP), maximum power point tracking (MPPT), photovoltaic (PV), comparative study, PV Converter.",
"title": ""
},
{
"docid": "a66b984f8a5e67e9da8a2dea5bf93faa",
"text": "Time series motifs have been in the literature for about fifteen years, but have only recently begun to receive significant attention in the research community. This is perhaps due to the growing realization that they implicitly offer solutions to a host of time series problems, including rule discovery, anomaly detection, density estimation, semantic segmentation, etc. Recent work has improved the scalability to the point where exact motifs can be computed on datasets with up to a million data points in tenable time. However, in some domains, for example seismology, there is an insatiable need to address even larger datasets. In this work we show that a combination of a novel algorithm and a high-performance GPU allows us to significantly improve the scalability of motif discovery. We demonstrate the scalability of our ideas by finding the full set of exact motifs on a dataset with one hundred million subsequences, by far the largest dataset ever mined for time series motifs. Furthermore, we demonstrate that our algorithm can produce actionable insights in seismology and other domains.",
"title": ""
},
{
"docid": "0d382cf8e63e65521e600f6f91920eb1",
"text": "Bioactive plant secondary products are frequently the drivers of complex rhizosphere interactions, including those with other plants, herbivores and microbiota. These chemically diverse molecules typically accumulate in a highly regulated manner in specialized plant tissues and organelles. We studied the production and localization of bioactive naphthoquinones (NQs) in the roots of Echium plantagineum, an invasive endemic weed in Australia. Roots of E. plantagineum produced red-coloured NQs in the periderm of primary and secondary roots, while seedling root hairs exuded NQs in copious quantities. Confocal imaging and microspectrofluorimetry confirmed that bioactive NQs were deposited in the outer layer of periderm cells in mature roots, resulting in red colouration. Intracellular examination revealed that periderm cells contained numerous small red vesicles for storage and intracellular transport of shikonins, followed by subsequent extracellular deposition. Periderm and root hair extracts of field- and phytotron-grown plants were analysed by UHPLC/Q-ToF MS (ultra high pressure liquid chromatography coupled to quadrupole time of flight mass spectrometry) and contained more than nine individual NQs, with dimethylacrylshikonin, and phytotoxic shikonin, deoxyshikonin and acetylshikonin predominating. In seedlings, shikonins were first found 48h following germination in the root-hypocotyl junction, as well as in root hair exudates. In contrast, the root cortices of both seedling and mature root tissues were devoid of NQs. SPRE (solid phase root zone extraction) microprobes strategically placed in soil surrounding living E. plantagineum plants successfully extracted significant levels of bioactive shikonins from living roots, rhizosphere and bulk soil surrounding roots. These findings suggest important roles for accumulation of shikonins in the root periderm and subsequent rhizodeposition in plant defence, interference, and invasion success.",
"title": ""
},
{
"docid": "e81b7c70e05b694a917efdd52ef59132",
"text": "Last several years, industrial and information technology field have undergone profound changes, entering \"Industry 4.0\" era. Industry4.0, as a representative of the future of the Fourth Industrial Revolution, evolved from embedded system to the Cyber Physical System (CPS). Manufacturing will be via the Internet, to achieve Internal and external network integration, toward the intelligent direction. This paper introduces the development of Industry 4.0, and the Cyber Physical System is introduced with the example of the Wise Information Technology of 120 (WIT120), then the application of Industry 4.0 in intelligent manufacturing is put forward through the digital factory to the intelligent factory. Finally, the future development direction of Industry 4.0 is analyzed, which provides reference for its application in intelligent manufacturing.",
"title": ""
},
{
"docid": "ef689b2437b3adec5bb5426d1fda68f5",
"text": "This paper presents the Photovoltaic (PV) array Emulator based on DC-DC buck converter control by the pole placement technique. A photovoltaic array emulator has the similar electrical characteristics to a photovoltaic panel or module. The photovoltaic array emulator independent from the environment condition so it generate and maintain same characteristics of real photovoltaic system. Here pole placement technique is used to control photovoltaic emulator's voltage and current characteristics which obtain by PV model. This whole system has been evaluated in MATLAB with linear and nonlinear loads. The simulation results shows that the PV array emulator gives similar behavior like real PV system.",
"title": ""
},
{
"docid": "f7696fca636f8959a1d0fbeba9b2fb67",
"text": "With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.",
"title": ""
},
{
"docid": "322f6321bc34750344064d474206fddb",
"text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.",
"title": ""
},
{
"docid": "7c097c95fb50750c082877ab7e277cd9",
"text": "40BAbstract: Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.",
"title": ""
},
{
"docid": "b27862cd75c2dd58ccca1826122e89f2",
"text": "Smart grids consist of suppliers, consumers, and other parts. The main suppliers are normally supervised by industrial control systems. These systems rely on programmable logic controllers (PLCs) to control industrial processes and communicate with the supervisory system. Until recently, industrial operators relied on the assumption that these PLCs are isolated from the online world and hence cannot be the target of attacks. Recent events, such as the infamous Stuxnet attack [15] directed the attention of the security and control system community to the vulnerabilities of control system elements, such as PLCs. In this paper, we design and implement the Crysys PLC honeypot (CryPLH) system to detect targeted attacks against industrial control systems. This PLC honeypot can be implemented as part of a larger security monitoring system. Our honeypot implementation improves upon existing solutions in several aspects: most importantly in level of interaction and ease of configuration. Results of an evaluation show that our honeypot is largely indistinguishable from a real device from the attacker’s perspective. As a collateral of our analysis, we were able to identify some security issues in the real PLC device we tested and implemented specific firewall rules to protect the device from targeted attacks.",
"title": ""
},
{
"docid": "350c7855cf36fcde407a84f8b66f33d8",
"text": "This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and $F_{0}$ features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.",
"title": ""
},
{
"docid": "11d458252bc83d062c8cf46a556c6c80",
"text": "A new bio-inspired optimisation algorithm: Bird Swarm Algorithm Xian-Bing Meng, X.Z. Gao, Lihua Lu, Yu Liu & Hengzhen Zhang a College of Information Engineering, Shanghai Maritime University, Shanghai, P.R. China b Chengdu Green Energy and Green Manufacturing R&D Center, Chengdu, P.R. China c Department of Electrical Engineering and Automation, Aalto University School of Electrical Engineering, Aalto, Finland d School of Computer Science, Fudan University, Shanghai, P.R. China e College of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, P.R. China Published online: 17 Jun 2015.",
"title": ""
},
{
"docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3",
"text": "3",
"title": ""
},
{
"docid": "a1d96f46cd4fa625da9e1bf2f6299c81",
"text": "The availability of increasingly higher power commercial microwave monolithic integrated circuit (MMIC) amplifiers enables the construction of solid state amplifiers achieving output powers and performance previously achievable only from traveling wave tube amplifiers (TWTAs). A high efficiency power amplifier incorporating an antipodal finline antenna array within a coaxial waveguide is investigated at Ka Band. The coaxial waveguide combiner structure is used to demonstrate a 120 Watt power amplifier from 27 to 31GHz by combining quantity (16), 10 Watt GaN MMIC devices; achieving typical PAE of 25% for the overall power amplifier assembly.",
"title": ""
},
{
"docid": "dc207fb8426f468dde2cb1d804b33539",
"text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPU makes it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture mapped it to a spherical object to compose a virtual reality immersive environment. The experimental results show that when we use NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve ninety times speedups.",
"title": ""
},
{
"docid": "79dde5e366caf6ad0376eb72f9e582eb",
"text": "Most of the datasets normally contain either numeric or categorical features. Mixed data comprises of both numeric and categorical features, and they frequently occur in various domains, such as health, nance, marketing, etc. Clustering is oen sought on mixed data to nd structures and to group similar objects. However, clustering mixed data is challenging because it is dicult to directly apply mathematical operations, such as summation, average etc. on the feature values of these datasets. In this paper, we review various types of mixed data clustering techniques in detail. We present a taxonomy to identify ten types of dierent mixed data clustering techniques. We also compare the performance of several mixed data clustering methods on publicly available datasets. e paper further identies challenges in developing dierent mixed data clustering algorithms and provides guidelines for future directions in this area.",
"title": ""
},
{
"docid": "e6e74971af2576ff119d277927727659",
"text": "In Germany there is limited information available about the distribution of the tropical rat mite (Ornithonyssus bacoti) in rodents. A few case reports show that this hematophagous mite species may also cause dermatitis in man. Having close body contact to small rodents is an important question for patients with pruritic dermatoses. The definitive diagnosis of this ectoparasitosis requires the detection of the parasite, which is more likely to be found in the environment of its host (in the cages, in the litter or in corners or cracks of the living area) than on the hosts' skin itself. A case of infestation with tropical rat mites in a family is reported here. Three mice that had been removed from the home two months before were the reservoir. The mites were detected in a room where the cage with the mice had been placed months ago. Treatment requires the eradication of the parasites on its hosts (by a veterinarian) and in the environment (by an exterminator) with adequate acaricides such as permethrin.",
"title": ""
}
] |
scidocsrr
|
aab8cb1dcb19f1be52078f62a3815da4
|
Cross-lingual Models of Word Embeddings: An Empirical Comparison
|
[
{
"docid": "9aa18b930fd24da1ba056cc94b64ecf4",
"text": "Recent work in learning bilingual representations tend to tailor towards achieving good performance on bilingual tasks, most often the crosslingual document classification (CLDC) evaluation, but to the detriment of preserving clustering structures of word representations monolingually. In this work, we propose a joint model to learn word representations from scratch that utilizes both the context coocurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint. Specifically, we extend the recently popular skipgram model to learn high quality bilingual representations efficiently. Our learned embeddings achieve a new state-of-the-art accuracy of 80.3 for the German to English CLDC task and a highly competitive performance of 90.7 for the other classification direction. At the same time, our models outperform best embeddings from past bilingual representation work by a large margin in the monolingual word similarity evaluation.1",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] |
[
{
"docid": "1e18d34152a15d84993124b1e689714a",
"text": "Objectives\nEconomic, social, technical, and political drivers are fundamentally changing the nature of work and work environments, with profound implications for the field of occupational health. Nevertheless, researchers and practitioners entering the field are largely being trained to assess and control exposures using approaches developed under old models of work and risks.\n\n\nMethods\nA speaker series and symposium were organized to broadly explore current challenges and future directions for the occupational health field. Broad themes identified throughout these discussions are characterized and discussed to highlight important future directions of occupational health.\n\n\nFindings\nDespite the relatively diverse group of presenters and topics addressed, some important cross-cutting themes emerged. Changes in work organization and the resulting insecurity and precarious employment arrangements change the nature of risk to a large fraction of the workforce. Workforce demographics are changing, and economic disparities among working groups are growing. Globalization exacerbates the 'race to the bottom' for cheap labor, poor regulatory oversight, and limited labor rights. Largely, as a result of these phenomena, the historical distinction between work and non-work exposures has become largely artificial and less useful in understanding risks and developing effective public health intervention models. Additional changes related to climate change, governmental and regulatory limitations, and inadequate surveillance systems challenge and frustrate occupational health progress, while new biomedical and information technologies expand the opportunities for understanding and intervening to improve worker health.\n\n\nConclusion\nThe ideas and evidences discussed during this project suggest that occupational health training, professional practice, and research evolve towards a more holistic, public health-oriented model of worker health. This will require engagement with a wide network of stakeholders. Research and training portfolios need to be broadened to better align with the current realities of work and health and to prepare practitioners for the changing array of occupational health challenges.",
"title": ""
},
{
"docid": "1a4c510097bb45346590b611c75a78c4",
"text": "We augment adversarial training (AT) with worst case adversarial training (WCAT) which improves adversarial robustness by 11% over the current stateof-the-art result in the `2 norm on CIFAR-10. We obtain verifiable average case and worst case robustness guarantees, based on the expected and maximum values of the norm of the gradient of the loss. We interpret adversarial training as Total Variation Regularization, which is a fundamental tool in mathematical image processing, and WCAT as Lipschitz regularization.",
"title": ""
},
{
"docid": "0a81730588c23c4ed153dab18791bdc2",
"text": "Deep neural networks (DNNs) have shown an inherent vulnerability to adversarial examples which are maliciously crafted on real examples by attackers, aiming at making target DNNs misbehave. The threats of adversarial examples are widely existed in image, voice, speech, and text recognition and classification. Inspired by the previous work, researches on adversarial attacks and defenses in text domain develop rapidly. In order to make people have a general understanding about the field, this article presents a comprehensive review on adversarial examples in text, including attack and defense approaches. We analyze the advantages and shortcomings of recent adversarial examples generation methods and elaborate the efficiency and limitations on the countermeasures. Finally, we discuss the challenges in adversarial texts and provide a research direction of this aspect.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "65d2cb9f55ee169347df6dc957c36629",
"text": "This paper presents data driven control system of DC motor by using system identification process. In this paper we use component base modeling similar to real DC motor by using simscape electronic systems for obtaining the input voltage and output speed of DC motor, the system identification toolbox and the nonlinear autoregressive with exogenous input (NARX) neural network for identification and obtaining the model of an object. The object model and training the neural network for data driven control system are developed by using MATLAB/SIMULINK platform. So, simulation results of this paper present the advantage of the suggested control method and the acceptable accuracy with respect to dynamic characteristics of the system.",
"title": ""
},
{
"docid": "63115b12e4a8192fdce26eb7e2f8989a",
"text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "bffe2f0dc57611d49a9041c66ce4515f",
"text": "This paper proposes a multimodal fusion model for 3D car detection inputting both point clouds and RGB images and generates the corresponding 3D bounding boxes. Our model is composed of two subnetworks: one is point-based method and another is multi-view based method, which is then combined by a decision fusion model. This decision model can absorb the advantages of these two sub-networks and restrict their shortcomings effectively. Experiments on the KITTI 3D car detection benchmark show that our work can achieve state of the art performance.",
"title": ""
},
{
"docid": "30bc96451dd979a8c08810415e4a2478",
"text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. Circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.",
"title": ""
},
{
"docid": "0557b35b7967184d8a69f09316e6e843",
"text": "Research on decision making under uncertainty has been strongly influenced by the documentation of numerous expected utility (EU) anomalies-behaviors that violate the expected utility axioms. The relative lack of progress on the closely related topic of intertemporal choice is partly due to the absence of an analogous set of discounted utility (DU) anomalies. We enumerate a set of DU anomalies analogous to the EU anomalies and propose a model that accounts for the anomalies, as well as other intertemporal choice phenomena incompatible with DU. We discuss implications for savings behavior, estimation of discount rates, and choice framing effects.",
"title": ""
},
{
"docid": "dd2b57d2f73149b7f60b68bdef51625d",
"text": "This work reviews (i) the most recent information on waste arisings and waste disposal options in the world, in the European Union (EU), in Organisation for Economic Co-operation and Development (OEDC) countries, and in some developing countries (notably China) and (ii) the potential direct and indirect impact of waste management activities on health. Though the main focus is primarily on municipal solid waste (MSW), exposure to bioaerosols from composting facilities and to pathogens from sewage treatment plants are considered. The reported effects of radioactive waste are also briefly reviewed. Hundreds of epidemiological studies reported on the incidence of a wide range of possible illnesses on employees of waste facilities and on the resident population. The main conclusion of the overall assessment of the literature is that the evidence of adverse health outcomes for the general population living near landfill sites, incinerators, composting facilities and nuclear installations is usually insufficient and inconclusive. There is convincing evidence of a high risk of gastrointestinal problems associated with pathogens originating at sewage treatment plants. In order to improve the quality and usefulness of epidemiological studies applied to populations residing in areas where waste management facilities are located or planned, preference should be given to prospective cohort studies of sufficient statistical power, with access to direct human exposure measurements, and supported by data on health effect biomarkers and susceptibility biomarkers.",
"title": ""
},
{
"docid": "6997c2d2f5e3a2c16f4eece6b2ef7abd",
"text": "Process, 347 Abstraction Concepts, 75ion Concepts, 75 Horizontal Abstraction, 75 Vertical Abstraction, 77",
"title": ""
},
{
"docid": "4a58ca6e628248088455bf9b8d10711b",
"text": "Developmental dysgraphia, being observed among 10–30% of school-aged children, is a disturbance or difficulty in the production of written language that has to do with the mechanics of writing. The objective of this study is to propose a method that can be used for automated diagnosis of this disorder, as well as for estimation of difficulty level as determined by the handwriting proficiency screening questionnaire. We used a digitizing tablet to acquire handwriting and consequently employed a complex parameterization in order to quantify its kinematic aspects and hidden complexities. We also introduced a simple intrawriter normalization that increased dysgraphia discrimination and HPSQ estimation accuracies. Using a random forest classifier, we reached 96% sensitivity and specificity, while in the case of automated rating by the HPSQ total score, we reached 10% estimation error. This study proves that digital parameterization of pressure and altitude/tilt patterns in children with dysgraphia can be used for preliminary diagnosis of this writing disorder.",
"title": ""
},
{
"docid": "d42a30b26ef26e7bf9b4e5766d620395",
"text": "Development of Web 2.0 enabled users to share information online, which results into an exponential growth of world wide web data. This leads to the so-called information overload problem. Recommender Systems (RS) are intelligent systems, helping on-line users to overcome information overload by providing customized recommendations on various items. In real world, people are willing to take advice and recommendation from their trustworthy friends only. Trust plays a key role in the decision-making process of a person. Incorporation of trust information in RS, results in a new class of recommender systems called trust aware recommender systems (TARS). This paper presents a survey on various implicit trust generation techniques in context of TARS. We have analyzed eight different implicit trust metrics, with respect to various properties of trust proposed by researchers in regard to TARS. Keywords—implicit trust; trust aware recommender system; trust metrics.",
"title": ""
},
{
"docid": "cb2e556dcd7ee57998dbc0c4746f59ff",
"text": "Affective understanding of film plays an important role in sophisticated movie analysis, ranking and indexing. However, due to the seemingly inscrutable nature of emotions and the broad affective gap from low-level features, this problem is seldom addressed. In this paper, we develop a systematic approach grounded upon psychology and cinematography to address several important issues in affective understanding. An appropriate set of affective categories are identified and steps for their classification developed. A number of effective audiovisual cues are formulated to help bridge the affective gap. In particular, a holistic method of extracting affective information from the multifaceted audio stream has been introduced. Besides classifying every scene in Hollywood domain movies probabilistically into the affective categories, some exciting applications are demonstrated. The experimental results validate the proposed approach and the efficacy of the audiovisual cues.",
"title": ""
},
{
"docid": "872ef59b5bec5f6cbb9fcb206b6fe49e",
"text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.",
"title": ""
},
{
"docid": "138e0d07a9c22224c04bf4b983819f01",
"text": "The olfactory receptor gene family is the largest in the mammalian genome (and larger than any other gene family in any other species), comprising 1% of genes. Beginning with a genetic radiation in reptiles roughly 200 million years ago, terrestrial vertebrates can detect millions of odorants. Each species has an olfactory repertoire unique to the genetic makeup of that species. The human olfactory repertoire is quite diverse. Contrary to erroneously reported estimates, humans can detect millions of airborne odorants (volatiles) in quite small concentrations. We exhibit tremendous variation in our genes that control the receptors in our olfactory epithelium, and this may relate to variation in cross-cultural perception of and preference for odors. With age, humans experience differential olfactory dysfunction, with some odors remaining strong and others becoming increasingly faint. Olfactory dysfunction has been pathologically linked to depression and quality of life issues, neurodegenerative disorders, adult and childhood obesity, and decreased nutrition in elderly females. Human pheromones, a controversial subject, seem to be a natural phenomenon, with a small number identified in clinical studies. The consumer product industry (perfumes, food and beverage, and pesticides) devotes billions of dollars each year supporting olfactory research in an effort to enhance product design and marketing. With so many intersecting areas of research, anthropology has a tremendous contribution to make to this growing body of work that crosses traditional disciplinary lines and has a clear applied component. Also, anthropology could benefit from considering the power of the olfactory system in memory, behavioral and social cues, evolutionary history, mate choice, food decisions, and overall health.",
"title": ""
},
{
"docid": "fde2aefec80624ff4bc21d055ffbe27b",
"text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.",
"title": ""
},
{
"docid": "0b61d0ffe709d29e133ead6d6211a003",
"text": "The hypothesis that Enterococcus faecalis resists common intracanal medications by forming biofilms was tested. E. faecalis colonization of 46 extracted, medicated roots was observed with scanning electron microscopy (SEM) and scanning confocal laser microscopy. SEM detected colonization of root canals medicated with calcium hydroxide points and the positive control within 2 days. SEM detected biofilms in canals medicated with calcium hydroxide paste in an average of 77 days. Scanning confocal laser microscopy analysis of two calcium hydroxide paste medicated roots showed viable colonies forming in a root canal infected for 86 days, whereas in a canal infected for 160 days, a mushroom-shape typical of a biofilm was observed. Analysis by sodium dodecyl sulfate polyacrylamide gel electrophoresis showed no differences between the protein profiles of bacteria in free-floating (planktonic) and inoculum cultures. Analysis of biofilm bacteria was inconclusive. These observations support potential E. faecalis biofilm formation in vivo in medicated root canals.",
"title": ""
},
{
"docid": "99196d6c559d31f465ea5c64d165c283",
"text": "The United States Supreme Court recently ruled that execution by a commonly used protocol of drug administration does not represent cruel or unusual punishment. Various medical journals have editorialized on this drug protocol, the death penalty in general and the role that physicians play. Many physicians, and societies of physicians, express the opinion that it is unethical for doctors to participate in executions. This Target Article explores the harm that occurs to murder victims' relatives when an execution is delayed or indefinitely postponed. By using established principles in psychiatry and the science of the brain, it is shown that victims' relatives can suffer brain damage when justice is not done. Conversely, adequate justice can reverse some of those changes in the brain. Thus, physician opposition to capital punishment may be contributing to significant harm. In this context, the ethics of physician involvement in lethal injection is complex.",
"title": ""
}
] |
scidocsrr
|
a368257773296c2f8337c1f93ecf14a1
|
Intrinsic Depth: Improving Depth Transfer with Intrinsic Images
|
[
{
"docid": "a5776d4da32a93c69b18c696c717e634",
"text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.",
"title": ""
}
] |
[
{
"docid": "aebb4ee07fd2b9f804746df85a6af151",
"text": "Markov chain Monte Carlo (MC) simulations started in earnest with the 1953 article by Nicholas Metropolis, Arianna Rosenbluth, Marshall Rosenbluth, Augusta Teller and Edward Teller [18]. Since then MC simulations have become an indispensable tool with applications in many branches of science. Some of those are reviewed in the proceedings [13] of the 2003 Los Alamos conference, which celebrated the 50th birthday of Metropolis simulations. The purpose of this tutorial is to provide an overview of basic concepts, which are prerequisites for an understanding of the more advanced lectures of this volume. In particular the lectures by Prof. Landau are closely related. The theory behind MC simulations is based on statistics and the analy-",
"title": ""
},
{
"docid": "62c4d4a68164835f897306222c6e9782",
"text": "M.A. in Translation Studies, Working as a Translator E-mail: shabnam_shakernia@yahoo.com This study aimed to investigate the use of Nida’s formal and dynamic equivalence and Newmark’s Semantic and communicative translation on two short stories. The present study aimed to investigate which of these approaches are the main focuses of the translators in the translations of the two short stories. In order to systematically conduct the study, two short stories with their corresponding Persian translations were analyzed. The findings obtained from the analysis show that the readability of the translation especially in short stories is more important than preserving the original wording. Moreover, the findings manifest that these translations are also tried to have naturalness.",
"title": ""
},
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
},
{
"docid": "e487efba10df1b548d897d95b348bed2",
"text": "Threats of distributed denial of service (DDoS) attacks have been increasing day-by-day due to rapid development of computer networks and associated infrastructure, and millions of software applications, large and small, addressing all varieties of tasks. Botnets pose a major threat to network security as they are widely used for many Internet crimes such as DDoS attacks, identity theft, email spamming, and click fraud. Botnet based DDoS attacks are catastrophic to the victim network as they can exhaust both network bandwidth and resources of the victim machine. This survey presents a comprehensive overview of DDoS attacks, their causes, types with a taxonomy, and technical details of various attack launching tools. A detailed discussion of several botnet architectures, tools developed using botnet architectures, and pros and cons analysis are also included. Furthermore, a list of important issues and research challenges is also reported.",
"title": ""
},
{
"docid": "2c798421352e4f128823fca2e229e812",
"text": "The use of renewables materials for industrial applications is becoming impellent due to the increasing demand of alternatives to scarce and unrenewable petroleum supplies. In this regard, nanocrystalline cellulose, NCC, derived from cellulose, the most abundant biopolymer, is one of the most promising materials. NCC has unique features, interesting for the development of new materials: the abundance of the source cellulose, its renewability and environmentally benign nature, its mechanical properties and its nano-scaled dimensions open a wide range of possible properties to be discovered. One of the most promising uses of NCC is in polymer matrix nanocomposites, because it can provide a significant reinforcement. This review provides an overview on this emerging nanomaterial, focusing on extraction procedures, especially from lignocellulosic biomass, and on technological developments and applications of NCC-based materials. Challenges and future opportunities of NCC-based materials will be are discussed as well as obstacles remaining for their large use.",
"title": ""
},
{
"docid": "bb853c369f37d2d960d6b312f80cfa98",
"text": "The purpose of this platform is to support research and education goals in human-robot interaction and mobile manipulation with applications that require the integration of these abilities. In particular, our research aims to develop personal robots that work with people as capable teammates to assist in eldercare, healthcare, domestic chores, and other physical tasks that require robots to serve as competent members of human-robot teams. The robot’s small, agile design is particularly well suited to human-robot interaction and coordination in human living spaces. Our collaborators include the Laboratory for Perceptual Robotics at the University of Massachusetts at Amherst, Xitome Design, Meka Robotics, and digitROBOTICS.",
"title": ""
},
{
"docid": "029cca0b7e62f9b52e3d35422c11cea4",
"text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.",
"title": ""
},
{
"docid": "42bc10578e76a0d006ee5d11484b1488",
"text": "In this paper, we present a wrapper-based acoustic group feature selection system for the INTERSPEECH 2015 Computational Paralinguistics Challenge (ComParE) 2015, Eating Condition (EC) Sub-challenge. The wrapper-based method has two components: the feature subset evaluation and the feature space search. The feature subset evaluation is performed using Support Vector Machine (SVM) classifiers. The wrapper method combined with complex algorithms such as SVM is computationally intensive. To address this, the feature space search uses Best Incremental Ranked Subset (BIRS), a fast and efficient algorithm. Moreover, we investigate considering the feature space in meaningful groups rather than individually. The acoustic feature space is partitioned into groups with each group representing a Low Level Descriptor (LLD). This partitioning reduces the time complexity of the search algorithm and makes the problem more tractable while attempting to gain insight into the relevant acoustic feature groups. Our wrapper-based system achieves improvement over the challenge baseline on the EC Sub-challenge test set using a variant of BIRS algorithm and LLD groups.",
"title": ""
},
{
"docid": "dfb5a6dbd1b8788cda6cb41ba741006d",
"text": "The notion of ‘user satisfaction’ plays a prominent role in HCI, yet it remains evasive. This exploratory study reports three experiments from an ongoing research program. In this program we aim to uncover (1) what user satisfaction is, (2) whether it is primarily determined by user expectations or by the interactive experience, (3) how user satisfaction may be related to perceived usability, and (4) the extent to which satisfaction rating scales capture the same interface qualities as uncovered in self-reports of interactive experiences. In all three experiments reported here user satisfaction was found to be a complex construct comprising several concepts, the distribution of which varied with the nature of the experience. Expectations were found to play an important role in the way users approached a browsing task. Satisfaction and perceived usability was assessed using two methods: scores derived from unstructured interviews and from the Web site Analysis MeasureMent Inventory (WAMMI) rating scales. Scores on these two instruments were somewhat similar, but conclusions drawn across all three experiments differed in terms of satisfaction ratings, suggesting that rating scales and interview statements may tap different interface qualities. Recent research suggests that ‘beauty’, or ‘appeal’ is linked to perceived usability so that what is ‘beautiful’ is also perceived to be usable [Interacting with Computers 13 (2000) 127]. This was true in one experiment here using a web site high in perceived usability and appeal. However, using a site with high appeal but very low in perceived usability yielded very high satisfaction, but low perceived usability scores, suggesting that what is ‘beautiful’ need not also be perceived to be usable. The results suggest that web designers may need to pay attention to both visual appeal and usability. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "c41ea96802e4b4f5de7d438fb54dbc6d",
"text": "AIM\nThis study explores nurse managers' experiences in dealing with patient/family violence toward their staff.\n\n\nBACKGROUND\nStudies and guidelines have emphasised the responsibility of nurse managers to manage violence directed at their staff. Although studies on nursing staff have highlighted the ineffectiveness of strategies used by nurse managers, few have explored their perspectives on dealing with violence.\n\n\nMETHODS\nThis qualitative study adopted a grounded theory approach to explore the experiences of 26 Japanese nurse managers.\n\n\nRESULTS\nThe nurse managers made decisions using internalised ethical values, which included maintaining organisational functioning, keeping staff safe, advocating for the patient/family and avoiding moral transgressions. They resolved internal conflicts among their ethical values by repeating a holistic assessment and simultaneous approach consisting of damage control and dialogue. They facilitated the involved persons' understanding, acceptance and sensemaking of the incident, which contributed to a resolution of the internal conflicts among their ethical values.\n\n\nCONCLUSIONS\nNurse managers adhere to their ethical values when dealing with patient violence toward nurses. Their ethical decision-making process should be acknowledged as an effective strategy to manage violence.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nOrganisational strategies that support and incorporate managers' ethical decision-making are needed to prevent and manage violence toward nurses.",
"title": ""
},
{
"docid": "3e94875b3229fc621ec90915414b9b22",
"text": "Inflammation, endothelial dysfunction, and mineral bone disease are critical factors contributing to morbidity and mortality in hemodialysis (HD) patients. Physical exercise alleviates inflammation and increases bone density. Here, we investigated the effects of intradialytic aerobic cycling exercise on HD patients. Forty end-stage renal disease patients undergoing HD were randomly assigned to either an exercise or control group. The patients in the exercise group performed a cycling program consisting of a 5-minute warm-up, 20 minutes of cycling at the desired workload, and a 5-minute cool down during 3 HD sessions per week for 3 months. Biochemical markers, inflammatory cytokines, nutritional status, the serum endothelial progenitor cell (EPC) count, bone mineral density, and functional capacity were analyzed. After 3 months of exercise, the patients in the exercise group showed significant improvements in serum albumin levels, the body mass index, inflammatory cytokine levels, and the number of cells positive for CD133, CD34, and kinase insert domain-conjugating receptor. Compared with the exercise group, the patients in the control group showed a loss of bone density at the femoral neck and no increases in EPCs. The patients in the exercise group also had a significantly greater 6-minute walk distance after completing the exercise program. Furthermore, the number of EPCs significantly correlated with the 6-minute walk distance both before and after the 3-month program. Intradialytic aerobic cycling exercise programs can effectively alleviate inflammation and improve nutrition, bone mineral density, and exercise tolerance in HD patients.",
"title": ""
},
{
"docid": "ca1177fa72f6b1ffdb2154c000514720",
"text": "The vast information related to products and services available online, of both objective and subjective nature, can be used to provide contextualized suggestions and guidance to possible new customers. User feedback and comments left on different shopping websites, portals and social media have become a valuable resource, and text analysis methods have become an invaluable tool to process this kind of data. A lot of business use-cases have applied sentiment analysis in order to gauge people’s response to a service or product, or to support customers with reaching a decision when choosing such a product. Although methods and techniques in this area abound, the majority only address a handful of natural languages at best. In this paper, we describe a lexiconbased sentiment analysis method designed around the Persian language. An evaluation of the developed GATE pipeline shows an encouraging overall accuracy of up to 69%.",
"title": ""
},
{
"docid": "b2626cc0e91d63378575caf91c89cfed",
"text": "BACKGROUND\nThe aim of the study was to compare clinical outcomes and quality of life in patients undergoing surgery for pilonidal disease with unroofing and marsupialization (UM) or rhomboid excision and Limberg flap (RELP) procedures.\n\n\nMETHODS\nOne hundred forty consecutive patients with pilonidal sinus were randomly assigned to receive either UM or RELP procedures. A specifically designed questionnaire was administered at three months to assess time from the operation until the patient was able to walk, return to daily activities, or sit without pain, time to return to work or school, and time to healing. Postoperative pain was assessed with a visual analog scale and the McGill Pain Questionnaire. Patients' quality of life was evaluated with the Cardiff Wound Impact Schedule (CWIS). Questionnaires were administered by a clinician blinded to treatment.\n\n\nRESULTS\nCompared with RELP, patients receiving UM had significantly shorter duration of operation and hospital stay, shorter time periods to walk, return to daily activities, or sit without pain and to return to work or school, and fewer complications. Time to final healing was significantly shorter and quality of life scores on the CWIS were higher in patients receiving RELP than in those receiving UM. Patients with UM had lower levels of pain one week after surgery.\n\n\nCONCLUSION\nThe unroofing and marsupialization procedure provides more clinical benefits in the treatment of pilonidal disease than rhomboid excision and Limberg flap and should be considered the procedure of choice. However, it may be associated with more inconvenience in wound care and longer healing time than rhomboid excision and Lindberg flap.",
"title": ""
},
{
"docid": "be9c88e6916e1c5af04e8ae1b6dc5748",
"text": "In neural networks, the learning rate of the gradient descent strongly affects performance. This prevents reliable out-of-the-box training of a model on a new problem. We propose the All Learning Rates At Once (Alrao) algorithm: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude, in the hope that enough units will get a close-to-optimal learning rate. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various network architectures and problems. In our experiments, all Alrao runs were able to learn well without any tuning.",
"title": ""
},
{
"docid": "07fe3943f0d2d7bbbcc98bcb764a1b23",
"text": "Advertisers use online customer data to target their marketing appeals. This has heightened consumers’ privacy concerns, leading governments to pass laws designed to protect consumer privacy by restricting the use of data and by restricting online tracking techniques used by websites. We use the responses of 3.3 million survey-takers who had been randomly exposed to 9,596 online display (banner) advertising campaigns to explore how strong privacy regulation in the European Union has influenced advertising effectiveness. We find that display advertising became far less effective at changing stated purchase intent after the laws were enacted relative to other countries. The loss in effectiveness was more pronounced for websites that had general content (such as news sites), where non-data-driven targeting is particularly hard to do. The loss of effectiveness was also more pronounced for ads with a smaller page presence and for ads that did not have additional interactive, video, or audio features. ∗Avi Goldfarb is Associate Professor of Marketing, Rotman School of Management, University of Toronto, 105 St George St., Toronto, ON. Tel. 416-946-8604. Email: agoldfarb@rotman.utoronto.ca. Catherine Tucker is Assistant Professor of Marketing, MIT Sloan School of Business, 1 Amherst St., E40-167, Cambridge, MA. Tel. 617-252-1499. Email: cetucker@mit.edu. We thank Glen Urban and participants at workshops at IDC, MIT, Michigan, Northwestern, UC Berkeley and Wharton for helpful comments.",
"title": ""
},
{
"docid": "13173c37670511963b23a42a3cc7e36b",
"text": "In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.",
"title": ""
},
{
"docid": "26f2e3918eb624ce346673d10b5d2eb7",
"text": "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.",
"title": ""
},
{
"docid": "8f7375f788d7d152477c7816852dee0d",
"text": "Many decentralized, inter-organizational environments such as supply chains are characterized by high transactional uncertainty and risk. At the same time, blockchain technology promises to mitigate these issues by introducing certainty into economic transactions. This paper discusses the findings of a Design Science Research project involving the construction and evaluation of an information technology artifact in collaboration with Maersk, a leading international shipping company, where central documents in shipping, such as the Bill of Lading, are turned into a smart contract on blockchain. Based on our insights from the project, we provide first evidence for preliminary design principles for applications that aim to mitigate the transactional risk and uncertainty in decentralized environments using blockchain. Both the artifact and the first evidence for emerging design principles are novel, contributing to the discourse on the implications that the advent of blockchain technology poses for governing economic activity.",
"title": ""
},
{
"docid": "b1ae52dfa5ed1bb9c835816ca3fd52b4",
"text": "The use of the halide-sensitive fluorescent probes (6-methoxy-N-(-sulphopropyl)quinolinium (SPQ) and N-(ethoxycarbonylmethyl)-6-methoxyquinolinium bromide (MQAE)) to measure chloride transport in cells has now been established as an alternative to the halide-selective electrode technique, radioisotope efflux assays and patch-clamp electrophysiology. We report here procedures for the assessment of halide efflux, using SPQ/MQAE halide-sensitive fluorescent indicators, from both adherent cultured epithelial cells and freshly obtained primary human airway epithelial cells. The procedure describes the calculation of efflux rate constants using experimentally derived SPQ/MQAE fluorescence intensities and empirically derived Stern-Volmer calibration constants. These fluorescence methods permit the quantitative analysis of CFTR function.",
"title": ""
},
{
"docid": "ee38062c7c479cfc9d8e9fc0982a9ae3",
"text": "Integrating data from heterogeneous sources is often modeled as merging graphs. Given two ormore “compatible”, but not-isomorphic graphs, the first step is to identify a graph alignment, where a potentially partial mapping of vertices between two graphs is computed. A significant portion of the literature on this problem only takes the global structure of the input graphs into account. Only more recent ones additionally use vertex and edge attributes to achieve a more accurate alignment. However, these methods are not designed to scale to map large graphs arising in many modern applications. We propose a new iterative graph aligner, gsaNA, that uses the global structure of the graphs to significantly reduce the problem size and align large graphs with a minimal loss of information. Concretely, we show that our proposed technique is highly flexible, can be used to achieve higher recall, and it is orders of magnitudes faster than the current state of the art techniques. ACM Reference format: Abdurrahman Yaşar and Ümit V. Çatalyürek. 2018. An Iterative Global Structure-Assisted Labeled Network Aligner. In Proceedings of Special Interest Group on Knowledge Discovery and Data Mining, London, England, August 18 (SIGKDD’18), 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn",
"title": ""
}
] |
scidocsrr
|
c56d00e759ba8f672561315ff5e69b28
|
LED-based optical cochlear implant on highly flexible triple layer polyimide substrates
|
[
{
"docid": "700eae4f09baf96bffe94d600098a5fa",
"text": "Temporally precise, noninvasive control of activity in well-defined neuronal populations is a long-sought goal of systems neuroscience. We adapted for this purpose the naturally occurring algal protein Channelrhodopsin-2, a rapidly gated light-sensitive cation channel, by using lentiviral gene delivery in combination with high-speed optical switching to photostimulate mammalian neurons. We demonstrate reliable, millisecond-timescale control of neuronal spiking, as well as control of excitatory and inhibitory synaptic transmission. This technology allows the use of light to alter neural processing at the level of single spikes and synaptic events, yielding a widely applicable tool for neuroscientists and biomedical engineers.",
"title": ""
}
] |
[
{
"docid": "1d956bafdb6b7d4aa2afcfeb77ac8cbb",
"text": "In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.",
"title": ""
},
{
"docid": "25c815f5fc0cf87bdef5e069cbee23a8",
"text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.",
"title": ""
},
{
"docid": "37b92d9059cbf0e3775e4bf20dbe1f64",
"text": "In this thesis, the framework of multi-stream combination has been explored to improve the noise robustness of automatic speech recognition (ASR) systems. The central idea of multi-stream ASR is to combine information from several sources to improve the performance of a system. The two important issues of multi-stream systems are which information sources (feature representations) to combine and what importance (weights) be given to each information source. In the framework of hybrid hidden Markov model/artificial neural network (HMM/ANN) and Tandem systems, several weighting strategies are investigated in this thesis to merge the posterior outputs of multi-layered perceptrons (MLPs) trained on different feature representations. The best results were obtained by inverse entropy weighting in which the posterior estimates at the output of the MLPs were weighted by their respective inverse output entropies. In the second part of this thesis, two feature representations have been investigated, namely pitch frequency and spectral entropy features. The pitch frequency feature is used along with perceptual linear prediction (PLP) features in a multi-stream framework. The second feature proposed in this thesis is estimated by applying an entropy function to the normalized spectrum to produce a measure which has been termed spectral entropy. The idea of the spectral entropy feature is extended to multi-band spectral entropy features by dividing the normalized full-band spectrum into sub-bands and estimating the spectral entropy of each sub-band. The proposed multi-band spectral entropy features were observed to be robust in high noise conditions. Subsequently, the idea of embedded training is extended to multi-stream HMM/ANN systems. To evaluate the maximum performance that can be achieved by frame-level weighting, we investigated an “oracle test”. We also studied the relationship of oracle selection to inverse entropy weighting and proposed an alternative interpretation of the oracle test to analyze the complementarity of streams in multi-stream systems. The techniques investigated in this work gave a significant improvement in performance for clean as well as noisy test conditions.",
"title": ""
},
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "7bc768eda75a2c7184e4a01d373641db",
"text": "T development of algorithmic ideas for next-generation sequencing can be traced back 300 years to the Prussian city of Königsberg (present-day Kaliningrad, Russia), where seven bridges joined the four parts of the city located on opposing banks of the Pregel River and two river islands (Fig. 1a). At the time, Königsberg’s residents enjoyed strolling through their city, and they wondered if every part of the city could be visited by walking across each of the seven bridges exactly once and returning to one’s starting location. The solution came in 1735, when the great mathematician Leonhard Euler1 made a conceptual breakthrough that would solve this ‘Bridges of Königsberg problem’. Euler’s first insight was to represent each landmass as a point (called a node) and each bridge as a line segment (called an edge) connecting the appropriate two points. This creates a graph—a network of nodes connected by edges (Fig. 1b). By describing a procedure for determining whether an arbitrary graph contains a path that visits every edge exactly once and returns to where it started, Euler not only resolved the Bridges of Königsberg problem but also effectively launched the entire branch of mathematics known today as graph theory2. Since Euler’s original description, the use of graph theory has turned out to have many additional practical applications, most of which have greater scientific importance than the development of walking itineraries. Specifically, Euler’s ideas were subsequently adapted by Dutch mathematician Nicolaas de Bruijn to find a cyclic sequence of letters taken from a given alphabet for which every possible word of a certain length (k) appears as a string of consecutive characters in the cyclic sequence exactly once (Box 1 and Fig. 2). Application of the de Bruijn graph has also proven invaluable in the field of molecular biology where researchers are faced with the problem of assembling billions of short sequencing reads into a single genome. In the following article, we describe the problems faced when constructing a genome and how the de Bruijn graph approach can be applied to assemble short-read sequences.",
"title": ""
},
{
"docid": "5c46291b9a3cab0fb2f9501fff6f6a36",
"text": "We discuss the fundamental limits of computing using a new paradigm for quantum computation, cellular automata composed of arrays of Coulombically coupled quantum dot molecules, which we term quantum cellular automata (QCA). Any logical or arithmetic operation can be performed in this scheme. QCA’s provide a valuable concrete example of quantum computation in which a number of fundamental issues come to light. We examine the physics of the computing process in this paradigm. We show to what extent thermodynamic considerations impose limits on the ultimate size of individual QCA arrays. Adiabatic operation of the QCA is examined and the implications for dissipationless computing are explored.",
"title": ""
},
{
"docid": "f5fb99d9dccdd2a16dc4c3f160e65389",
"text": "We present the Flink system for the extraction, aggregation and visualization of online social networks. Flink employs semantic technology for reasoning with personal information extracted from a number of electronic information sources including web pages, emails, publication archives and FOAF profiles. The acquired knowledge is used for the purposes of social network analysis and for generating a webbased presentation of the community. We demonstrate our novel method to social science based on electronic data using the example of the Semantic Web research community.",
"title": ""
},
{
"docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4",
"text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.",
"title": ""
},
{
"docid": "8bd15d6b67bf73c85d83f5548bc48c56",
"text": "Traditional time series similarity search, based on relevance feedback, combines initial, positive and negative relevant series directly to create new query sequence for the next search; it can’t make full use of the negative relevant sequence, even results in inaccurate query results due to excessive adjustment of the query sequence in some cases. In this paper, time series similarity search based on separate relevance feedback is proposed, each round of query includes positive query and negative query, and combines the results of them to generate the query results of each round. For one data sequence, positive query evaluates its similarity to the initial and positive relevant sequences, and negative query evaluates it’s similarity to the negative relevant sequences. The final similar sequences should be not only close to positive relevant series but also far away from negative relevant series. The experiments on UCR data sets showed that, compared with the retrieval method without feedback and the commonly used feedback algorithm the proposed method can improve accuracy of similarity search on some data sets.",
"title": ""
},
{
"docid": "64cbd9f9644cc71f5108c3f2ee7851e7",
"text": "The use of neurofeedback as an operant conditioning paradigm has disclosed that participants are able to gain some control over particular aspects of their electroencephalogram (EEG). Based on the association between theta activity (4-7 Hz) and working memory performance, and sensorimotor rhythm (SMR) activity (12-15 Hz) and attentional processing, we investigated the possibility that training healthy individuals to enhance either of these frequencies would specifically influence a particular aspect of cognitive performance, relative to a non-neurofeedback control-group. The results revealed that after eight sessions of neurofeedback the SMR-group were able to selectively enhance their SMR activity, as indexed by increased SMR/theta and SMR/beta ratios. In contrast, those trained to selectively enhance theta activity failed to exhibit any changes in their EEG. Furthermore, the SMR-group exhibited a significant and clear improvement in cued recall performance, using a semantic working memory task, and to a lesser extent showed improved accuracy of focused attentional processing using a 2-sequence continuous performance task. This suggests that normal healthy individuals can learn to increase a specific component of their EEG activity, and that such enhanced activity may facilitate semantic processing in a working memory task and to a lesser extent focused attention. We discuss possible mechanisms that could mediate such effects and indicate a number of directions for future research.",
"title": ""
},
{
"docid": "051afcc588dc8888699fd2e627d935ac",
"text": "Objective: Evaluation of dietary intakes and lifestyle factors of German vegans.Design: Cross-sectional study.Settings: Germany.Subjects: Subjects were recruited through journal advertisements. Of 868 volunteers, only 154 participated in all study segments (pre- and main questionnaire, two 9-day food frequency questionnaires, blood sampling) and fulfilled the following study criteria: vegan dietary intake at least 1 year prior to study start, minimum age of 18 y, no pregnancy or childbirth during the last 12 months.Interventions: No interventions.Results: All the 154 subjects had a comparatively low BMI (median 21.2 kg/m2), with an extremely low mean consumption of alcohol (0.77±3.14 g/day) and tobacco (96.8% were nonsmokers). Mean energy intake (total collective: 8.23±2.77 MJ) was higher in strict vegans than in moderate ones. Mean carbohydrate, fat, and protein intakes in proportion to energy (total collective: 57.1:29.7:11.6%) agreed with current recommendations. Recommended intakes for vitamins and minerals were attained through diet, except for calcium (median intake: 81.1% of recommendation), iodine (median: 40.6%), and cobalamin (median: 8.8%). For the male subgroup, the intake of a small amount of food of animal origin improved vitamin and mineral nutrient densities (except for zinc), whereas this was not the case for the female subgroup (except for calcium).Conclusion: In order to reach favourable vitamin and mineral intakes, vegans should consider taking supplements containing riboflavin, cobalamin, calcium, and iodine. Intake of total energy and protein should also be improved.Sponsorship: EDEN Foundation, Bad Soden, Germany; Stoll VITA Foundation, Waldshut-Tiengen, Germany",
"title": ""
},
{
"docid": "6868e3b2432d9914a9b4a4fd2b50b3ee",
"text": "Nutritional deficiencies detection for coffee leaves is a task which is often undertaken manually by experts on the field known as agronomists. The process they follow to carry this task is based on observation of the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled and thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the data extracted. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and evolutive approach by combining them in a methodology to enhance the quality of the data while guiding the users during the main stages of data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. The extracted dataset by applying our proposed methodology in a case study on Peruvian coffee leaves resulted in 93.33% accuracy with 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with background on coffee leaves. The accuracy of the dataset was higher than independently implementing the evolutive feedback strategy and an empiric approach which resulted in 86.67% and 70% accuracy respectively under the same conditions.",
"title": ""
},
{
"docid": "fb1a178c7c097fbbf0921dcef915dc55",
"text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.",
"title": ""
},
{
"docid": "6fe371a784928b17b3360d12961ae40d",
"text": "The combination of filters concept is a simple and flexible method to circumvent various compromises hampering the operation of adaptive linear filters. Recently, applications which require the identification of not only linear, but also nonlinear systems are widely studied. In this paper, we propose a combination of adaptive Volterra filters as the most versatile nonlinear models with memory. Moreover, we develop a novel approach that shows a similar behavior but significantly reduces the computational load by combining Volterra kernels rather than complete Volterra filters. Following an outline of the basic principles, the second part of the paper focuses on the application to nonlinear acoustic echo cancellation scenarios. As the ratio of the linear to nonlinear echo signal power is, in general, a priori unknown and time-variant, the performance of nonlinear echo cancellers may be inferior to a linear echo canceller if the nonlinear distortion is very low. Therefore, a modified version of the combination of kernels is developed obtaining a robust behavior regardless of the level of nonlinear distortion. Experiments with noise and speech signals demonstrate the desired behavior and the robustness of both the combination of Volterra filters and the combination of kernels approaches in different application scenarios.",
"title": ""
},
{
"docid": "72ddcb7a55918a328576a811a89d245b",
"text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.",
"title": ""
},
{
"docid": "18e5b72779f6860e2a0f2ec7251b0718",
"text": "This paper presents a novel dielectric resonator filter exploiting dual TM11 degenerate modes. The dielectric rod resonators are short circuited on the top and bottom surfaces to the metallic cavity. The dual-mode cavities can be conveniently arranged in many practical coupling configurations. Through-holes in height direction are made in each of the dielectric rods for the frequency tuning and coupling screws. All the coupling elements, including inter-cavity coupling elements, are accessible from the top of the filter cavity. This planar coupling configuration is very attractive for composing a diplexer or a parallel multifilter assembly using the proposed filter structure. To demonstrate the new filter technology, two eight-pole filters with cross-couplings for UMTS band are prototyped and tested. It has been experimentally shown that as compared to a coaxial combline filter with a similar unloaded Q, the proposed dual-mode filter can save filter volume by more than 50%. Moreover, a simple method that can effectively suppress the lower band spurious mode is also presented.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
},
{
"docid": "91af0de4d566cdd28282b76ba43ca163",
"text": "Melancholia is typified by features of psychomotor slowing, anxiety, appetite loss and sleep changes. It is usually observed in 20-30% of individuals meeting diagnostic criteria for major depressive disorder (MDD). There is currently no agreement on whether melancholic MDD represents a distinct entity defined by neurobiological as well as clinical features or, rather, a specifier for MDD. This situation is reflected in the revisions to DSM, including in the DSM-5 due for release in 2013. With this context in mind, the authors review the origins of the construct of melancholia in MDD, its theoretical grounding and the defining characteristics that arose from this research. The authors then outline the state of knowledge on the neurobiology of melancholia. This second aspect is illustrative of the National Institutes of Mental Health's research domain criteria initiative, which offers a framework for redefining constructs along neurobiological dimensions. The authors also consider the outlook for identifying a useful biosignature of melancholia.",
"title": ""
}
] |
scidocsrr
|
bd6beaf1f4ef4fcc9f860dac019de463
|
Spring Embedders and Force Directed Graph Drawing Algorithms
|
[
{
"docid": "b66846f076d41c8be3f5921cc085d997",
"text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.",
"title": ""
},
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
}
] |
[
{
"docid": "3205184f918eab105ee17bfb12277696",
"text": "The Trilobita were characterized by a cephalic region in which the biomineralized exoskeleton showed relatively high morphological differentiation among a taxonomically stable set of well defined segments, and an ontogenetically and taxonomically dynamic trunk region in which both exoskeletal segments and ventral appendages were similar in overall form. Ventral appendages were homonomous biramous limbs throughout both the cephalon and trunk, except for the most anterior appendage pair that was antenniform, preoral, and uniramous, and a posteriormost pair of antenniform cerci, known only in one species. In some clades trunk exoskeletal segments were divided into two batches. In some, but not all, of these clades the boundary between batches coincided with the boundary between the thorax and the adult pygidium. The repeated differentiation of the trunk into two batches of segments from the homonomous trunk condition indicates an evolutionary trend in aspects of body patterning regulation that was achieved independently in several trilobite clades. The phylogenetic placement of trilobites and congruence of broad patterns of tagmosis with those seen among extant arthropods suggest that the expression domains of trilobite cephalic Hox genes may have overlapped in a manner similar to that seen among extant arachnates. This, coupled with the fact that trilobites likely possessed ten Hox genes, presents one alternative to a recent model in which Hox gene distribution in trilobites was equated to eight putative divisions of the trilobite body plan.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "89eee86640807e11fa02d0de4862b3a5",
"text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.",
"title": ""
},
{
"docid": "3a74928dc955504a12dbfe7cd2deeb16",
"text": "Very few large-scale music research datasets are publicly available. There is an increasing need for such datasets, because the shift from physical to digital distribution in the music industry has given the listener access to a large body of music, which needs to be cataloged efficiently and be easily browsable. Additionally, deep learning and feature learning techniques are becoming increasingly popular for music information retrieval applications, and they typically require large amounts of training data to work well. In this paper, we propose to exploit an available large-scale music dataset, the Million Song Dataset (MSD), for classification tasks on other datasets, by reusing models trained on the MSD for feature extraction. This transfer learning approach, which we refer to as supervised pre-training, was previously shown to be very effective for computer vision problems. We show that features learned from MSD audio fragments in a supervised manner, using tag labels and user listening data, consistently outperform features learned in an unsupervised manner in this setting, provided that the learned feature extractor is of limited complexity. We evaluate our approach on the GTZAN, 1517-Artists, Unique and Magnatagatune datasets.",
"title": ""
},
{
"docid": "391cce3ac9ab87e31203637d89a8a082",
"text": "MicroRNAs (miRNAs) are small conserved non-coding RNA molecules that post-transcriptionally regulate gene expression by targeting the 3' untranslated region (UTR) of specific messenger RNAs (mRNAs) for degradation or translational repression. miRNA-mediated gene regulation is critical for normal cellular functions such as the cell cycle, differentiation, and apoptosis, and as much as one-third of human mRNAs may be miRNA targets. Emerging evidence has demonstrated that miRNAs play a vital role in the regulation of immunological functions and the prevention of autoimmunity. Here we review the many newly discovered roles of miRNA regulation in immune functions and in the development of autoimmunity and autoimmune disease. Specifically, we discuss the involvement of miRNA regulation in innate and adaptive immune responses, immune cell development, T regulatory cell stability and function, and differential miRNA expression in rheumatoid arthritis and systemic lupus erythematosus.",
"title": ""
},
{
"docid": "12e726dadcb76bfb6dc4f98e8b520347",
"text": "Inexact and approximate circuit design is a promising approach to improve performance and energy efficiency in technology-scaled and low-power digital systems. Such strategy is suitable for error-tolerant applications involving perceptive or statistical outputs. This paper presents a novel architecture of an Inexact Speculative Adder with optimized hardware efficiency and advanced compensation technique with either error correction or error reduction. This general topology of speculative adders improves performance and enables precise accuracy control. A brief design methodology and comparative study of this speculative adder are also presented herein, demonstrating power savings up to 26 % and energy-delay-area reductions up to 60% at equivalent accuracy compared to the state-of-the-art.",
"title": ""
},
{
"docid": "fe06ac2458e00c5447a255486189f1d1",
"text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.",
"title": ""
},
{
"docid": "6ba37b8e2a8e9f35c7d14d7544aeda61",
"text": "In real-world applications, knowledge bases consisting of all the available information for a specific domain, along with the current state of affairs, will typically contain contradictory data, coming from different sources, as well as data with varying degrees of uncertainty attached. An important aspect of the effort associated with maintaining such knowledge bases is deciding what information is no longer useful; pieces of information may be outdated; may come from sources that have recently been discovered to be of low quality; or abundant evidence may be available that contradicts them. In this paper, we propose a probabilistic structured argumentation framework that arises from the extension of Presumptive Defeasible Logic Programming (PreDeLP) with probabilistic models, and argue that this formalism is capable of addressing these basic issues. The formalism is capable of handling contradictory and uncertain data, and we study non-prioritized belief revision over probabilistic PreDeLP programs that can help with knowledge-base maintenance. For belief revision, we propose a set of rationality postulates — based on well-known ones developed for classical knowledge bases — that characterize how these belief revision operations should behave, and study classes of operators along with theoretical relationships with the proposed postulates, including representation theorems stating the equivalence between classes of operators and their associated postulates. We then demonstrate how our framework can be used to address the attribution problem in cyber security/cyber warfare.",
"title": ""
},
{
"docid": "d76e46eec2aa0abcbbd47b8270673efa",
"text": "OBJECTIVE\nTo explore the clinical efficacy and the mechanism of acupoint autohemotherapy in the treatment of allergic rhinitis.\n\n\nMETHODS\nForty-five cases were randomized into an autohemotherapy group (24 cases) and a western medication group (21 cases). In the autohemotherapy group, the acupoint autohemotherapy was applied to the bilateral Dingchuan (EX-B 1), Fengmen (BL 12), Feishu (BL 13), Quchi (LI 11), Zusanli (ST 36) and the others. In the western medication group, loratadine tablets were prescribed. The patients were treated continuously for 3 months in both groups. The clinical symptom score was taken for the assessment of clinical efficacy. The enzyme-linked immunoadsordent assay (ELISA) was adopted to determine the contents of serum interferon-gamma (IFN-gamma) and interleukin-12 (IL-12).\n\n\nRESULTS\nThe total effective rate was 83.3% (20/24) in the autohemotherapy group, which was obviously superior to 66.7% (14/21) in the western medication group (P < 0.05). After treatment, the clinical symptom scores of patients in the two groups were all reduced. The improvements in the scores of sneezing and clear nasal discharge in the autohemotherapy group were much more significant than those in the western medication group (both P < 0.05). After treatment, the serum IL-12 content of patients in the two groups was all increased to different extents as compared with that before treatment (both P < 0.05). In the autohemotherapy group, the serum IFN-gamma was increased after treatment (P < 0.05). In the western medication group, the serum IFN-gamma was not increased obviously after treatment (P > 0.05). The increase of the above index contents in the autohemotherapy group were more apparent than those in the western medication group (both P < 0.05).\n\n\nCONCLUSION\nThe acupoint autohemotherapy relieves significantly the clinical symptoms of allergic rhinitis and the therapeutic effect is better than that with oral administration of loratadine tablets, which is probably relevant with the increase of serum IL-12 content and the promotion of IFN-gamma synthesis.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "74927f18642b088b1d2d1ff2c57eb675",
"text": "AIM\nThe conventional treatment of a single missing tooth is most frequently based on the provision of a fixed dental prosthesis (FDPs). A variety of designs and restorative materials are available which have an impact on the treatment outcome. Consequently, it was the aim of this review to compare resin-bonded, all-ceramic and metal-ceramic FDPs based on existing evidence.\n\n\nMATERIALS AND METHODS\nAn electronic literature search using \"metal-ceramic\" AND \"fixed dental prosthesis\" AND \"clinical, all-ceramic\" AND \"fixed dental prosthesis\" AND \"clinical, resin-bonded\" AND \"fixed dental prosthesis\" AND \"clinical, fiber reinforced composite\" AND \"clinical, monolithic\" AND \"zirconia\" AND \"clinical\" was conducted and supplemented by the manual searching of bibliographies from articles already included.\n\n\nRESULTS\nA total of 258 relevant articles were identified. Metal-ceramic FDPs still show the highest survival rates of all tooth-supported restorations. Depending on the ceramic system used, all-ceramic restorations may reach comparable survival rates while the technical complications, i.e. chipping fractures of veneering materials in particular, are more frequent. Resin-bonded FDPs can be seen as long-term provisional restorations with the survival rate being higher in anterior locations and when a cantilever design is applied. Inlay-retained FDPs and the use of fiber-reinforced composites overall results in a compromised long-term prognosis. Recently advocated monolithic zirconia restorations bear the risk of low temperature degradation.\n\n\nCONCLUSIONS\nSeveral variables affect treatment planning for a given patient situation, with survival and success rates of different restorative options representing only one factor. The broad variety of designs and materials available for conventional tooth-supported restorations should still be considered as a viable treatment option for single tooth replacement.",
"title": ""
},
{
"docid": "d8b0ef94385d1379baeb499622253a02",
"text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.",
"title": ""
},
{
"docid": "7c485c59a1662966d7d8e079c67f43ca",
"text": "Given the diversity of recommendation algorithms, choosing one technique is becoming increasingly difficult. In this paper, we explore methods for combining multiple recommendation approaches. We studied rank aggregation methods that have been proposed for the metasearch task (i.e., fusing the outputs of different search engines) but have never been applied to merge top-N recommender systems. These methods require no training data nor parameter tuning. We analysed two families of methods: voting-based and score-based approaches. These rank aggregation techniques yield significant improvements over state-of-the-art top-N recommenders. In particular, score-based methods yielded good results; however, some voting techniques were also competitive without using score information, which may be unavailable in some recommendation scenarios. The studied methods not only improve the state of the art of recommendation algorithms but they are also simple and efficient.",
"title": ""
},
{
"docid": "222f28aa8b4cc4eaddb21e21c9020593",
"text": "We study an approach to text categorization that combines di stributional clustering of words and a Support Vector Machine (SVM) classifier. This word-cluster r presentation is computed using the recently introducedInformation Bottleneckmethod, which generates a compact and efficient representation of documents. When combined with the classifica tion power of the SVM, this method yields high performance in text categorization. This novel combination of SVM with word-cluster representation is compared with SVM-based categorization using the simpler bag-of-words (BOW) representation. The comparison is performed over three kno wn datasets. On one of these datasets (the 20 Newsgroups) the method based on word clusters signifi ca tly outperforms the word-based representation in terms of categorization accuracy or repr esentation efficiency. On the two other sets (Reuters-21578 and WebKB) the word-based representation s lightly outperforms the word-cluster representation. We investigate the potential reasons for t his behavior and relate it to structural differences between the datasets.",
"title": ""
},
{
"docid": "1573dcbb7b858ab6802018484f00ef91",
"text": "There is a multitude of tools available for Business Model Innovation (BMI). However, Business models (BM) and supporting tools are not yet widely known by micro, small and medium sized companies (SMEs). In this paper, we build on analysis of 61 cases to present typical BMI paths of European SMEs. Firstly, we constructed two paths for established companies that we named as 'I want to grow' and 'I want to make my business profitable'. We also found one path for start-ups: 'I want to start a new business'. Secondly, we suggest appropriate BM toolsets for the three paths. The identified paths and related tools contribute to BMI research and practise with an aim to boost BMI in SMEs.",
"title": ""
},
{
"docid": "90cfe22d4e436e9caa61a2ac198cb7f7",
"text": "Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN’s architecture (i.e., its hyper-parameters) broadly determines the DNN’s accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN’s architecture facilitates other, existing DNN attacks. This paper presents Cache Telepathy: a fast and accurate mechanism to steal a DNN’s architecture using the cache side channel. Our attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the architectures by very substantially reducing the search space of target DNN architectures. For example, for VGG using OpenBLAS, it reduces the search space from more than 1035 architectures to just 16.",
"title": ""
},
{
"docid": "b583e130f5066166107e36f766f513ac",
"text": "Non-intrusive load monitoring, or energy disaggregation, aims to separate household energy consumption data collected from a single point of measurement into appliance-level consumption data. In recent years, the field has rapidly expanded due to increased interest as national deployments of smart meters have begun in many countries. However, empirically comparing disaggregation algorithms is currently virtually impossible. This is due to the different data sets used, the lack of reference implementations of these algorithms and the variety of accuracy metrics employed. To address this challenge, we present the Non-intrusive Load Monitoring Toolkit (NILMTK); an open source toolkit designed specifically to enable the comparison of energy disaggregation algorithms in a reproducible manner. This work is the first research to compare multiple disaggregation approaches across multiple publicly available data sets. Our toolkit includes parsers for a range of existing data sets, a collection of preprocessing algorithms, a set of statistics for describing data sets, two reference benchmark disaggregation algorithms and a suite of accuracy metrics. We demonstrate the range of reproducible analyses which are made possible by our toolkit, including the analysis of six publicly available data sets and the evaluation of both benchmark disaggregation algorithms across such data sets.",
"title": ""
},
{
"docid": "9c008dc2f3da4453317ce92666184da0",
"text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). Compared to the widely used cache simulation tool, Valgrind, our simulator is three time faster.",
"title": ""
},
{
"docid": "2aa324628b082f1fd6d1e1e0221d1bad",
"text": "Recent behavioral investigations have revealed that autistics perform more proficiently on Raven's Standard Progressive Matrices (RSPM) than would be predicted by their Wechsler intelligence scores. A widely-used test of fluid reasoning and intelligence, the RSPM assays abilities to flexibly infer rules, manage goal hierarchies, and perform high-level abstractions. The neural substrates for these abilities are known to encompass a large frontoparietal network, with different processing models placing variable emphasis on the specific roles of the prefrontal or posterior regions. We used functional magnetic resonance imaging to explore the neural bases of autistics' RSPM problem solving. Fifteen autistic and eighteen non-autistic participants, matched on age, sex, manual preference and Wechsler IQ, completed 60 self-paced randomly-ordered RSPM items along with a visually similar 60-item pattern matching comparison task. Accuracy and response times did not differ between groups in the pattern matching task. In the RSPM task, autistics performed with similar accuracy, but with shorter response times, compared to their non-autistic controls. In both the entire sample and a subsample of participants additionally matched on RSPM performance to control for potential response time confounds, neural activity was similar in both groups for the pattern matching task. However, for the RSPM task, autistics displayed relatively increased task-related activity in extrastriate areas (BA18), and decreased activity in the lateral prefrontal cortex (BA9) and the medial posterior parietal cortex (BA7). Visual processing mechanisms may therefore play a more prominent role in reasoning in autistics.",
"title": ""
},
{
"docid": "286dd9575b4de418b0d2daf121306e62",
"text": "Absfract—Impedance transforming networks are described which consist of short lengths of relatively high impedance transmission line alternating with short lengths of relatively low impedance line. The sections of transmission line are all exactly the same length (except for corrections for fringing capacitances), and the lengths of the line sections are typically short compared to a quarter wavelength throughout the operating band of the transformer. Tables of designs are presented which give exactly Chebyshev transmission characteristics between resistive terminations having ratios ranging from 1.5 to 10, and for fractional bandwidths ranging from 0.10 to 1.20. These impedance-transforming networks should have application where very compact transmission-line or dielectric-layer impedance transformers are desired.",
"title": ""
}
] |
scidocsrr
|
dc7e1b56824cb335d10c1cded140fe1a
|
Allele-specific FKBP5 DNA demethylation mediates gene–childhood trauma interactions
|
[
{
"docid": "1e40fbed88643aa696d74460dc489358",
"text": "We introduce a statistical model for microarray gene expression data that comprises data calibration, the quantification of differential expression, and the quantification of measurement error. In particular, we derive a transformation h for intensity measurements, and a difference statistic Deltah whose variance is approximately constant along the whole intensity range. This forms a basis for statistical inference from microarray data, and provides a rational data pre-processing strategy for multivariate analyses. For the transformation h, the parametric form h(x)=arsinh(a+bx) is derived from a model of the variance-versus-mean dependence for microarray intensity data, using the method of variance stabilizing transformations. For large intensities, h coincides with the logarithmic transformation, and Deltah with the log-ratio. The parameters of h together with those of the calibration between experiments are estimated with a robust variant of maximum-likelihood estimation. We demonstrate our approach on data sets from different experimental platforms, including two-colour cDNA arrays and a series of Affymetrix oligonucleotide arrays.",
"title": ""
}
] |
[
{
"docid": "c0fd9b73e2af25591e3c939cdbed1c1a",
"text": "We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN",
"title": ""
},
{
"docid": "f333bc03686cf85aee0a65d4a81e8b34",
"text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.",
"title": ""
},
{
"docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e",
"text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.",
"title": ""
},
{
"docid": "e3747bf4694854d0a38d73de5d478f17",
"text": "Virtual Reality (VR) is starting to be used in psychological therapy around the world. However, a thorough understanding of the reason why VR is effective and what effect it has on the human psyche is still missing. Most research on this subject is related to the concept of presence. This paper gives an up-to-date overview of research in this diverse field. It starts with the most prevailing definitions and theories on presence, most of which attribute special roles for the mental process of attention and for mental models of the virtual space. A review of the phenomena thought to be effected by presence shows that there is still a strong need for research on this subject because little conclusive evidence exists regarding the relationship between presence and phenoma such as emotional responses to virtual stimuli. An investigation shows there has been substantial research for developing methods for measuring presence and research regarding factors that contribute to presence. Knowledge of these contributing factors can play a vital role in development of new VR applications, but key knowledge elements in this area are still missing.",
"title": ""
},
{
"docid": "3caf447d0dbf258566124e79f8617f45",
"text": "High-level actions (HLAs) lie at the heart of hierarchical planning. Typically, an HLA admits multiple refinements into primitive action sequences. Correct descriptions of the effects of HLAs may be essential to their effective use, yet the literature is mostly silent. We propose an angelic semantics for HLAs, the key concept of which is the set of states reachable by some refinement of a high-level plan, representing uncertainty that will ultimately be resolved in the planning agent’s own best interest. We describe upper and lower approximations to these reachable sets, and show that the resulting definition of a high-level solution automatically satisfies the upward and downward refinement properties. We define a STRIPS-like notation for such descriptions. A sound and complete hierarchical planning algorithm is given and its computational benefits are demonstrated.",
"title": ""
},
{
"docid": "46dcde3ad2de1d8971a7290e6dd3e335",
"text": "Modeling the semantic similarity between text documents presents a significant theoretical challenge for cognitive science, with ready-made applications in information handling and decision support systems dealing with text. While a number of candidate models exist, they have generally not been assessed in terms of their ability to emulate human judgments of similarity. To address this problem, we conducted an experiment that collected repeated similarity measures for each pair of documents in a small corpus of short news documents. An analysis of human performance showed inter-rater correlations of about 0.6. We then considered the ability of existing models—using wordbased, n-gram and Latent Semantic Analysis (LSA) approaches—to model these human judgments. The best performed LSA model produced correlations of about 0.6, consistent with human performance, while the best performed word-based and n-gram models achieved correlations closer to 0.5. Many of the remaining models showed almost no correlation with human performance. Based on our results, we provide some discussion of the key strengths and weaknesses of the models we examined.",
"title": ""
},
{
"docid": "017f0d1c89531bc3664a9504b0b70d30",
"text": "In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.",
"title": ""
},
{
"docid": "056f5179fa5c0cdea06d29d22a756086",
"text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017",
"title": ""
},
{
"docid": "e34ba302c8d4310cc64305a3329eada9",
"text": "The aim of this study was to examine the validity of vertical jump (VJ) performance variables in elite-standard male and female Italian soccer players. One hundred eighteen national team soccer players (n = 56 men and n = 62 women) were tested for countermovement (CMJ) and squatting jump (SJ) heights. The stretch-shortening cycle efficiency (SSCE) was assessed as percentage of CMJ gain over SJ ([INCREMENT]CMJ-SJ), difference (CMJ-SJ), and ratio (CMJ:SJ). Results showed significant sex difference in SJ and CMJ. Differences in SSCE were mainly in the absolute variables between sexes. Cutoff values for CMJ and SJ using sex as construct were 34.4 and 32.9 cm, respectively. No competitive level differences in VJ performance were detected in the male players. Female national team players showed VJ performance higher than the under 17 counterpart. The results of this study showed that VJ performance could not discriminate between competitive levels in male national team-selected soccer players. However, the use of CMJ and SJ normative data may help strength and conditioning coaches in prescribing lower limb explosive strength training in elite soccer players. In this, variations in VJ performance in the range of approximately 1 cm may be regarded as of interest in tracking noncasual variation in elite-standard soccer players.",
"title": ""
},
{
"docid": "c76fc0f9ce4422bee1d2cf3964f1024c",
"text": "The subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction. We present a computational extension of the Bechdel test: A popular tool to assess if a movie contains a male gender bias, by looking for two female characters who discuss about something besides a man. We provide the tools to quantify Bechdel scores for both genders, and we measure them in movie scripts and large datasets of dialogues between users of MySpace and Twitter. Comparing movies and users of social media, we find that movies and Twitter conversations have a consistent male bias, which does not appear when analyzing MySpace. Furthermore, the narrative of Twitter is closer to the movies that do not pass the Bechdel test than to",
"title": ""
},
{
"docid": "618496f6e0b1da51e1e2c81d72c4a6f1",
"text": "Paid employment within clinical setting, such as externships for undergraduate student, are used locally and globally to better prepare and retain new graduates for actual practice and facilitate their transition into becoming registered nurses. However, the influence of paid employment on the post-registration experience of such nurses remains unclear. Through the use of narrative inquiry, this study explores how the experience of pre-registration paid employment shapes the post-registration experience of newly graduated registered nurses. Repeated individual interviews were conducted with 18 new graduates, and focus group interviews were conducted with 11 preceptors and 10 stakeholders recruited from 8 public hospitals in Hong Kong. The data were subjected to narrative and paradigmatic analyses. Taken-for-granted assumptions about the knowledge and performance of graduates who worked in the same unit for their undergraduate paid work experience were uncovered. These assumptions affected the quantity and quality of support and time that other senior nurses provided to these graduates for their further development into competent nurses and patient advocates, which could have implications for patient safety. It is our hope that this narrative inquiry will heighten awareness of taken-for-granted assumptions, so as to help graduates transition to their new role and provide quality patient care.",
"title": ""
},
{
"docid": "5dddbc2b2c53436c9d2176045118dce5",
"text": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model nor the activity prediction model. Graphical abstract .",
"title": ""
},
{
"docid": "324cf25f3c288572c217896b8082f213",
"text": "Problem-based learning (PBL) is an instructional approach that has been used successfully for over 30 years and continues to gain acceptance in multiple disciplines. It is an instructional (and curricular) learner-centered approach that empowers learners to conduct research, integrate theory and practice, and apply knowledge and skills to develop a viable solution to a defined problem. This overview presents a brief history, followed by a discussion of the similarities and differences between PBL and other experiential approaches to teaching, and identifies some of the challenges that lie ahead for PBL.",
"title": ""
},
{
"docid": "18883fdb506d235fdf72b46e76923e41",
"text": "The Ponseti method for the management of idiopathic clubfoot has recently experienced a rise in popularity, with several centers reporting excellent outcomes. The challenge in achieving a successful outcome with this method lies not in correcting deformity but in preventing relapse. The most common cause of relapse is failure to adhere to the prescribed postcorrective bracing regimen. Socioeconomic status, cultural factors, and physician-parent communication may influence parental compliance with bracing. New, more user-friendly braces have been introduced in the hope of improving the rate of compliance. Strategies that may be helpful in promoting adherence include educating the family at the outset about the importance of bracing, encouraging calls and visits to discuss problems, providing clear written instructions, avoiding or promptly addressing skin problems, and refraining from criticism of the family when noncompliance is evident. A strong physician-family partnership and consideration of underlying cognitive, socioeconomic, and cultural issues may lead to improved adherence to postcorrective bracing protocols and better patient outcomes.",
"title": ""
},
{
"docid": "3ed3b4f507c32f6423ca3918fa3eb843",
"text": "In recent years, it has been clearly evidenced that most cells in a human being are not human: they are microbial, represented by more than 1000 microbial species. The vast majority of microbial species give rise to symbiotic host-bacterial interactions that are fundamental for human health. The complex of these microbial communities has been defined as microbiota or microbiome. These bacterial communities, forged over millennia of co-evolution with humans, are at the basis of a partnership with the developing human newborn, which is based on reciprocal molecular exchanges and cross-talking. Recent data on the role of the human microbiota in newborns and children clearly indicate that microbes have a potential importance to pediatrics, contributing to host nutrition, developmental regulation of intestinal angiogenesis, protection from pathogens, and development of the immune system. This review is aimed at reporting the most recent data on the knowledge of microbiota origin and development in the human newborn, and on the multiple factors influencing development and maturation of our microbiota, including the use and abuse of antibiotic therapies.",
"title": ""
},
{
"docid": "587a3faf58498312ffe63cd692d70a51",
"text": "Soft tissue filler injection has been a very common procedure worldwide since filler injection was first introduced for soft tissue augmentation. Currently, filler is used in various medical fields with satisfactory results, but the number of complications is increasing due to the increased use of filler. The complications after filler injection can occur at any time after the procedure, early and delayed, and they range from minor to severe. In this review, based on our experience and previously published other articles, we suggest a treatment algorithm to help wound healing and tissue regeneration and generate good aesthetic results with early treatment in response to the side effects of filler. Familiarity with the treatment of these rare complications is essential for achieving the best possible outcome.",
"title": ""
},
{
"docid": "6d75fc5b57df4f4b497e550c9bd4d14b",
"text": "A highly-digital clock multiplication architecture that achieves excellent jitter and mitigates supply noise is presented. The proposed architecture utilizes a calibration-free digital multiplying delay-locked loop (MDLL) to decouple the tradeoff between time-to-digital converter (TDC) resolution and oscillator phase noise in digital phase-locked loops (PLLs). Both reduction in jitter accumulation down to sub-picosecond levels and improved supply noise rejection over conventional PLL architectures is demonstrated with low power consumption. A digital PLL that employs a 1-bit TDC and a low power regulator that seeks to improve supply noise immunity without increasing loop delay is presented and used to compare with the proposed MDLL. The prototype MDLL and DPLL chips are fabricated in a 0.13 μm CMOS technology and operate from a nominal 1.1 V supply. The proposed MDLL achieves an integrated jitter of 400 fs rms at 1.5 GHz output frequency from a 375 MHz reference clock, while consuming 890 μ W. The worst-case supply noise sensitivity of the MDLL is 20 fspp/mVpp which translates to a jitter degradation of 3.8 ps in the presence of 200 mV supply noise. The proposed clock multipliers occupy active die areas of 0.25 mm2 and 0.2 mm2 for the MDLL and DPLL, respectively.",
"title": ""
},
{
"docid": "3ca04efcb370e8a30ab5ad42d1d2d047",
"text": "The exceptionally adhesive foot of the gecko remains clean in dirty environments by shedding contaminants with each step. Synthetic gecko-inspired adhesives have achieved similar attachment strengths to the gecko on smooth surfaces, but the process of contact self-cleaning has yet to be effectively demonstrated. Here, we present the first gecko-inspired adhesive that has matched both the attachment strength and the contact self-cleaning performance of the gecko's foot on a smooth surface. Contact self-cleaning experiments were performed with three different sizes of mushroom-shaped elastomer microfibres and five different sizes of spherical silica contaminants. Using a load-drag-unload dry contact cleaning process similar to the loads acting on the gecko foot during locomotion, our fully contaminated synthetic gecko adhesives could recover lost adhesion at a rate comparable to that of the gecko. We observed that the relative size of contaminants to the characteristic size of the microfibres in the synthetic adhesive strongly determined how and to what degree the adhesive recovered from contamination. Our approximate model and experimental results show that the dominant mechanism of contact self-cleaning is particle rolling during the drag process. Embedding of particles between adjacent fibres was observed for particles with diameter smaller than the fibre tips, and further studied as a temporary cleaning mechanism. By incorporating contact self-cleaning capabilities, real-world applications of synthetic gecko adhesives, such as reusable tapes, clothing closures and medical adhesives, would become feasible.",
"title": ""
},
{
"docid": "91f45641d96b519dd65bf00249571a99",
"text": "Tissue perfusion is determined by both blood vessel geometry and the rheological properties of blood. Blood is a nonNewtonian fluid, its viscosity being dependent on flow conditions. Blood and plasma viscosities, as well as the rheological properties of blood cells (e.g., deformability and aggregation of red blood cells), are influenced by disease processes and extreme physiological conditions. These rheological parameters may in turn affect the blood flow in vessels, and hence tissue perfusion. Unfortunately it is not always possible to determine if a change in rheological parameters is the cause or the result of a disease process. The hemorheology-tissue perfusion relationship is further complicated by the distinct in vivo behavior of blood. Besides the special hemodynamic mechanisms affecting the composition of blood in various regions of the vascular system, autoregulation based on vascular control mechanisms further complicates this relationship. Hemorheological parameters may be especially important for adequate tissue perfusion if the vascular system is geometrically challenged.",
"title": ""
},
{
"docid": "4cbf1340e30becd3570220fa13e0e115",
"text": "Observation: Wooly hair is usually present at birth or infancy with a genetic linkage of autosomal dominant or recessive. Hair is curly, thick and often heavily pigmented. This condition has been reported with eye, teeth, cardiac anomalies. Also, keratosis pilaris atrophicans, ichtiyosis and deafness, palmoplantar keratoderma and Noonan syndrome may accompany wooly hair. We report two sisters with wooly hair, simultaneously developed an inflammatory tinea capitis (kerion). Our patients have neither a systemic disease nor eye, dental and other skin disorders. In their family; mother, two sisters, and one brother of them have also wooly hair without any other clinical associations. To our knowledge, this is the second, describes the association of wooly hair with tinea capitis. However, in the first report, mother and her son, also had keratosis follicularis spinulosa decalvans. As a result, presence of tinea capitis in both patients may be explained by the enhanced susceptibility to fungal infection in keratinizing disorders.",
"title": ""
}
] |
scidocsrr
|
9a910acb6e64485f5d2d4b4cff24c246
|
Securing smart maintenance services: Hardware-security and TLS for MQTT
|
[
{
"docid": "d6a80510eaf935268aec872e2d9112e0",
"text": "SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established.\n We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack.\n The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations.",
"title": ""
},
{
"docid": "e4574b1e8241599b5c3ef740b461efba",
"text": "Increasing awareness of ICS security issues has brought about a growing body of work in this area, including pioneering contributions based on realistic control system logs and network traces. This paper surveys the state of the art in ICS security research, including efforts of industrial researchers, highlighting the most interesting works. Research efforts are grouped into divergent areas, where we add “secure control” as a new category to capture security goals specific to control systems that differ from security goals in traditional IT systems.",
"title": ""
},
{
"docid": "951c2ce5816ffd7be55b8ae99a82f5fc",
"text": "Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. \n We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated.",
"title": ""
},
{
"docid": "28b7905d804cef8e54dbdf4f63f6495d",
"text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.",
"title": ""
}
] |
[
{
"docid": "2f1caa8b2c83d7581343bd29cc6f898d",
"text": "Sequencing ribosomal RNA (rRNA) genes is currently the method of choice for phylogenetic reconstruction, nucleic acid based detection and quantification of microbial diversity. The ARB software suite with its corresponding rRNA datasets has been accepted by researchers worldwide as a standard tool for large scale rRNA analysis. However, the rapid increase of publicly available rRNA sequence data has recently hampered the maintenance of comprehensive and curated rRNA knowledge databases. A new system, SILVA (from Latin silva, forest), was implemented to provide a central comprehensive web resource for up to date, quality controlled databases of aligned rRNA sequences from the Bacteria, Archaea and Eukarya domains. All sequences are checked for anomalies, carry a rich set of sequence associated contextual information, have multiple taxonomic classifications, and the latest validly described nomenclature. Furthermore, two precompiled sequence datasets compatible with ARB are offered for download on the SILVA website: (i) the reference (Ref) datasets, comprising only high quality, nearly full length sequences suitable for in-depth phylogenetic analysis and probe design and (ii) the comprehensive Parc datasets with all publicly available rRNA sequences longer than 300 nucleotides suitable for biodiversity analyses. The latest publicly available database release 91 (August 2007) hosts 547 521 sequences split into 461 823 small subunit and 85 689 large subunit rRNAs.",
"title": ""
},
{
"docid": "36342d65aaa9dff0339f8c1c8cb23f30",
"text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.",
"title": ""
},
{
"docid": "5f52b31afe9bf18f009a10343ccedaf0",
"text": "The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating the quality degradation caused by dim LCD backlight for mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of human visual system to preserve the color appearance of backlight-scaled images as much as possible. Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.",
"title": ""
},
{
"docid": "55f0aa6a21e4976dc4705b037fd82a11",
"text": "Dynamic topic models (DTMs) are very effective in discovering topics and capturing their evolution trends in time series data. To do posterior inference of DTMs, existing methods are all batch algorithms that scan the full dataset before each update of the model and make inexact variational approximations with mean-field assumptions. Due to a lack of a more scalable inference algorithm, despite the usefulness, DTMs have not captured large topic dynamics. This paper fills this research void, and presents a fast and parallelizable inference algorithm using Gibbs Sampling with Stochastic Gradient Langevin Dynamics that does not make any unwarranted assumptions. We also present a Metropolis-Hastings based O(1) sampler for topic assignments for each word token. In a distributed environment, our algorithm requires very little communication between workers during sampling (almost embarrassingly parallel) and scales up to large-scale applications. We are able to learn the largest Dynamic Topic Model to our knowledge, and learned the dynamics of 1,000 topics from 2.6 million documents in less than half an hour, and our empirical results show that our algorithm is not only orders of magnitude faster than the baselines but also achieves lower perplexity.",
"title": ""
},
{
"docid": "b58071272823aa6bc6c9e04f45ffd6a5",
"text": "Ever since its initiation, the Catalogue for Transmission Genetics in Arabs (CTGA) database has endeavored to index reports of genetic disorders among Arab patients. This is a daunting task, compounded by the fact that the Arabs not only constitute a huge population of close to 340 million people, but are also spread over 23 countries covering 14 million square km in two continents within their homeland itself. Add to this the Arab diaspora spread around the world, and a picture of truly massive proportions emerges. The CTGA Database Development Team realized earlier on that this enormous undertaking would need to be managed in a systematic manner. To this end, the Team works on the database in a targeted fashion, focusing on a single Arab country at a time. This strategy, initiated with the United Arab Emirates, has enabled the complete coverage of genetic disorders in two more countries (Kingdom of Bahrain and Sultanate of Oman), while maintaining a basal amount of information in the remaining Arab countries.",
"title": ""
},
{
"docid": "339efad8a055a90b43abebd9a4884baa",
"text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.",
"title": ""
},
{
"docid": "0f85ce6afd09646ee1b5242a4d6122d1",
"text": "Environmental concern has resulted in a renewed interest in bio-based materials. Among them, plant fibers are perceived as an environmentally friendly substitute to glass fibers for the reinforcement of composites, particularly in automotive engineering. Due to their wide availability, low cost, low density, high-specific mechanical properties, and eco-friendly image, they are increasingly being employed as reinforcements in polymer matrix composites. Indeed, their complex microstructure as a composite material makes plant fiber a really interesting and challenging subject to study. Research subjects about such fibers are abundant because there are always some issues to prevent their use at large scale (poor adhesion, variability, low thermal resistance, hydrophilic behavior). The choice of natural fibers rather than glass fibers as filler yields a change of the final properties of the composite. One of the most relevant differences between the two kinds of fiber is their response to humidity. Actually, glass fibers are considered as hydrophobic whereas plant fibers have a pronounced hydrophilic behavior. Composite materials are often submitted to variable climatic conditions during their lifetime, including unsteady hygroscopic conditions. However, in humid conditions, strong hydrophilic behavior of such reinforcing fibers leads to high level of moisture absorption in wet environments. This results in the structural modification of the fibers and an evolution of their mechanical properties together with the composites in which they are fitted in. Thereby, the understanding of these moisture absorption mechanisms as well as the influence of water on the final properties of these fibers and their composites is of great interest to get a better control of such new biomaterials. This is the topic of this review paper.",
"title": ""
},
{
"docid": "bcf1f9c23e790bda059603f98dcb1fea",
"text": "Hurdle technology is used in industrialized as well as in developing countries for the gentle but effective preservation of foods. Hurdle technology was developed several years ago as a new concept for the production of safe, stable, nutritious, tasty, and economical foods. Previously hurdle technology, i.e., a combination of preservation methods, was used empirically without much knowledge of the governing principles. The intelligent application of hurdle technology has become more prevalent now, because the principles of major preservative factors for foods (e.g., temperature, pH, aw, Eh, competitive flora), and their interactions, became better known. Recently, the influence of food preservation methods on the physiology and behavior of microorganisms in foods, i.e. their homeostasis, metabolic exhaustion, stress reactions, are taken into account, and the novel concept of multi-target food preservation emerged. The present contribution reviews the concept of the potential hurdles for foods, the hurdle effect, and the hurdle technology for the prospects of the future goal of a multi-target preservation of foods.",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "78b07bce8817c60dce98ad434d1fc3e0",
"text": "Boost converters are widely used as power-factorcorrected preregulators. In high-power applications, interleaved operation of two or more boost converters has been proposed to increase the output power and to reduce the output ripple. A major design criterion then is to ensure equal current sharing among the parallel converters. In this paper, a converter consisting of two interleaved and intercoupled boost converter cells is proposed and investigated. The boost converter cells have very good current sharing characteristics even in the presence of relatively large duty cycle mismatch. In addition, it can be designed to have small input current ripple and zero boost-rectifier reverse-recovery loss. The operating principle, steady-state analysis, and comparison with the conventional boost converter are presented. Simulation and experimental results are also given.",
"title": ""
},
{
"docid": "7d57caa810120e1590ad277fb8113222",
"text": "Cancer is increasing the total number of unexpected deaths around the world. Until now, cancer research could not significantly contribute to a proper solution for the cancer patient, and as a result, the high death rate is uncontrolled. The present research aim is to extract the significant prevention factors for particular types of cancer. To find out the prevention factors, we first constructed a prevention factor data set with an extensive literature review on bladder, breast, cervical, lung, prostate and skin cancer. We subsequently employed three association rule mining algorithms, Apriori, Predictive apriori and Tertius algorithms in order to discover most of the significant prevention factors against these specific types of cancer. Experimental results illustrate that Apriori is the most useful association rule-mining algorithm to be used in the discovery of prevention factors.",
"title": ""
},
{
"docid": "1a65a6e22d57bb9cd15ba01943eeaa25",
"text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients",
"title": ""
},
{
"docid": "3cc0218ffbdb04ee37c20138c1b56f3f",
"text": "Many kinds of communication networks, in particular social and opportunistic networks, rely at least partly on on humans to help move data across the network. Human altruistic behavior is an important factor determining the feasibility of such a system. In this paper, we study the impact of different distributions of altruism on the throughput and delay of mobile social communication system. We evaluate the system performance using four experimental human mobility traces with uniform and community-biased traffic patterns. We found that mobile social networks are very robust to the distributions of altruism due to the nature of multiple paths. We further confirm the results by simulations on two popular social network models. To the best of our knowledge, this is the first complete study of the impact of altruism on mobile social networks, including the impact of topologies and traffic patterns.",
"title": ""
},
{
"docid": "d5a18a82f8e041b717291c69676c7094",
"text": "Total sleep deprivation (TSD) for one whole night improves depressive symptoms in 40-60% of treatments. The degree of clinical change spans a continuum from complete remission to worsening (in 2-7%). Other side effects are sleepiness and (hypo-) mania. Sleep deprivation (SD) response shows up in the SD night or on the following day. Ten to 15% of patients respond after recovery sleep only. After recovery sleep 50-80% of day 1 responders suffer a complete or partial relapse; but improvement can last for weeks. Sleep seems to lead to relapse although this is not necessarily the case. Treatment effects may be stabilised by antidepressant drugs, lithium, shifting of sleep time or light therapy. The best predictor of a therapeutic effect is a large variability of mood. Current opinion is that partial sleep deprivation (PSD) in the second half of the night is equally effective as TSD. There are, however, indications that TSD is superior. Early PSD (i.e. sleeping between 3:00 and 6:00) has the same effect as late PSD given equal sleep duration. New data cast doubt on the time-honoured conviction that REM sleep deprivation is more effective than non-REM SD. Both may work by reducing total sleep time. SD is an unspecific therapy. The main indication is the depressive syndrome. Some studies show positive effects in Parkinson's disease. It is still unknown how sleep deprivation works.",
"title": ""
},
{
"docid": "6eef3e845dfb65b0cbe993bb4f679697",
"text": "Pathologic conditions in the shoulder of a throwing athlete frequently represent a breakdown of multiple elements of the shoulder restraint system, both static and dynamic, and also a breakdown in the kinetic chain. Physical therapy and rehabilitation should be, with only a few exceptions, the primary treatment for throwing athletes before operative treatment is considered. Articular-sided partial rotator cuff tears and superior labral tears are common in throwing athletes. Operative treatment can be successful when nonoperative measures have failed. Throwing athletes who have a glenohumeral internal rotation deficit have a good response, in most cases, to stretching of the posteroinferior aspect of the capsule.",
"title": ""
},
{
"docid": "b987b231b1f8e3013c956dc5f0c33fdb",
"text": "Context As autonomous driving technology matures towards series production, it is necessary to take a deeper look at various aspects of electrical/electronic (E/E) architectures for autonomous driving. Objective This paper describes a functional reference architecture for autonomous driving, along with various considerations that influence such an architecture. The functionality is described at the logical level, without dependence on specific implementation technologies. Method Engineering design has been used as the research method, which focuses on creating solutions intended for practical application. The architecture has been refined and applied over a five year period to the construction of prototype autonomous vehicles in three different categories, with both academic and industrial stakeholders. Results The architectural components are divided into categories pertaining to (i) perception, (ii) decision and control, and (iii) vehicle platform manipulation. The architecture itself is divided into two layers comprising the vehicle platform and a cognitive driving intelligence. The distribution of components among the architectural layers considers two extremes: one where the vehicle platform is as \"dumb\" as possible, and the other, where the vehicle platform can be treated as an autonomous system with limited intelligence. We recommend a clean split between the driving intelligence and the vehicle platform. The architecture description includes identification of stakeholder concerns, which are grouped under the business and engineering cate-",
"title": ""
},
{
"docid": "618ba5659da9110ae02299dff4be227f",
"text": "Over the past decade , many organizations have begun to routinely capture huge volumes of historical data describing their operations, products, and customers. At the same time, scientists and engineers in many fields have been capturing increasingly complex experimental data sets, such as gigabytes of functional magnetic resonance imaging (MRI) data describing brain activity in humans. The field of data mining addresses the question of how best to use this historical data to discover general regularities and improve the process of making decisions. Machine Learning and Data Mining",
"title": ""
},
{
"docid": "def6762457fd4e95a35e3c83990c4943",
"text": "The possibility of controlling dexterous hand prostheses by using a direct connection with the nervous system is particularly interesting for the significant improvement of the quality of life of patients, which can derive from this achievement. Among the various approaches, peripheral nerve based intrafascicular electrodes are excellent neural interface candidates, representing an excellent compromise between high selectivity and relatively low invasiveness. Moreover, this approach has undergone preliminary testing in human volunteers and has shown promise. In this paper, we investigate whether the use of intrafascicular electrodes can be used to decode multiple sensory and motor information channels with the aim to develop a finite state algorithm that may be employed to control neuroprostheses and neurocontrolled hand prostheses. The results achieved both in animal and human experiments show that the combination of multiple sites recordings and advanced signal processing techniques (such as wavelet denoising and spike sorting algorithms) can be used to identify both sensory stimuli (in animal models) and motor commands (in a human volunteer). These findings have interesting implications, which should be investigated in future experiments.",
"title": ""
},
{
"docid": "8a32bdadcaa2c94f83e95c19e400835b",
"text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.",
"title": ""
}
] |
scidocsrr
|
15240d99f8396b94a54232466730f5d0
|
GeoLife: A Collaborative Social Networking Service among User, Location and Trajectory
|
[
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
}
] |
[
{
"docid": "6d81f84f909854299226390d31466446",
"text": "Recently, the new Kinect One has been issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version was using a structured light approach, one would expect various differences in the characteristics of the range data delivered by both devices. This paper presents a detailed and in-depth comparison between both devices. In order to conduct the comparison, we propose a framework of seven different experimental setups, which is a generic basis for evaluating range cameras such as Kinect. The experiments have been designed with the goal to capture individual effects of the Kinect devices as isolatedly as possible and in a way, that they can also be adopted, in order to apply them to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists that are interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected, specific benefits and potential problem of either device.",
"title": ""
},
{
"docid": "e189f36ba0fcb91d0608d0651c60516e",
"text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "a04d5892b14ebbe78a74057444dfc3af",
"text": "Finding the appropriate number of clusters to which documents should be partitioned is crucial in document clustering. In this paper, we propose a novel approach, namely DPMFP, to discover the latent cluster structure based on the DPM model without requiring the number of clusters as input. Document features are automatically partitioned into two groups, in particular, discriminative words and nondiscriminative words, and contribute differently to document clustering. A variational inference algorithm is investigated to infer the document collection structure as well as the partition of document words at the same time. Our experiments indicate that our proposed approach performs well on the synthetic data set as well as real data sets. The comparison between our approach and state-of-the-art document clustering approaches shows that our approach is robust and effective for document clustering.",
"title": ""
},
{
"docid": "532fa89af9499db8d4c50abcb17b633a",
"text": "Our languages are in constant flux driven by external factors such as cultural, societal and technological changes, as well as by only partially understood internal motivations. Words acquire new meanings and lose old senses, new words are coined or borrowed from other languages and obsolete words slide into obscurity. Understanding the characteristics of shifts in the meaning and in the use of words is useful for those who work with the content of historical texts, the interested general public, but also in and of itself. The findings from automatic lexical semantic change detection, and the models of diachronic conceptual change are currently being incorporated in approaches for measuring document across-time similarity, information retrieval from long-term document archives, the design of OCR algorithms, and so on. In recent years we have seen a surge in interest in the academic community in computational methods and tools supporting inquiry into diachronic conceptual change and lexical replacement. This article is an extract of a survey of recent computational techniques to tackle lexical semantic change currently under review. In this article we focus on diachronic conceptual change as an extension of semantic change.",
"title": ""
},
{
"docid": "7f9b9bef62aed80a918ef78dcd15fb2a",
"text": "Transferring image-based object detectors to domain of videos remains a challenging problem. Previous efforts mostly exploit optical flow to propagate features across frames, aiming to achieve a good trade-off between performance and computational complexity. However, introducing an extra model to estimate optical flow would significantly increase the overall model size. The gap between optical flow and high-level features can hinder it from establishing the spatial correspondence accurately. Instead of relying on optical flow, this paper proposes a novel module called Progressive Sparse Local Attention (PSLA), which establishes the spatial correspondence between features across frames in a local region with progressive sparse strides and uses the correspondence to propagate features. Based on PSLA, Recursive Feature Updating (RFU) and Dense feature Transforming (DFT) are introduced to model temporal appearance and enrich feature representation respectively. Finally, a novel framework for video object detection is proposed. Experiments on ImageNet VID are conducted. Our framework achieves a state-of-the-art speedaccuracy trade-off with significantly reduced model capacity.",
"title": ""
},
{
"docid": "cb6d8eb59f135add17b7c56d60b08e3c",
"text": "Deep learning methods are a class of machine learning techniques capable of identifying highly complex patterns in large datasets. Here, we provide a perspective and primer on deep learning applications for genome analysis. We discuss successful applications in the fields of regulatory genomics, variant calling and pathogenicity scores. We include general guidance for how to effectively use deep learning methods as well as a practical guide to tools and resources. This primer is accompanied by an interactive online tutorial. This perspective presents a primer on deep learning applications for the genomics field. It includes a general guide for how to use deep learning and describes the current tools and resources that are available to the community.",
"title": ""
},
{
"docid": "979e25abca763217d58b995c06bd6c83",
"text": "This paper examines search across competing e-commerce sites. By analyzing panel data from over 10,000 Internet households and three commodity-like products (books, compact discs (CDs), and air travel services), we show that the amount of online search is actually quite limited. On average, households visit only 1.2 book sites, 1.3 CD sites, and 1.8 travel sites during a typical active month in each category. Using probabilistic models, we characterize search behavior at the individual level in terms of (1) depth of search, (2) dynamics of search, and (3) activity of search. We model an individual's tendency to search as a logarithmic process, finding that shoppers search across very few sites in a given shopping month. We extend the logarithmic model of search to allow for time-varying dynamics that may cause the consumer to evolve and, perhaps, learn to search over time. We find that for two of the three product categories studied, search propensity does not change from month to month. However, in the third product category we find mild evidence of time-varying dynamics, where search decreases over time from already low levels. Finally, we model the level of a household's shopping activity and integrate it into our model of search. The results suggest that more-active online shoppers tend also to search across more sites. This consumer characteristic largely drives the dynamics of search that can easily be mistaken as increases from experience at the individual level.",
"title": ""
},
{
"docid": "30e229f91456c3d7eb108032b3470b41",
"text": "Software as a service (SaaS) is a rapidly growing model of software licensing. In contrast to traditional software where users buy a perpetual-use license, SaaS users buy a subscription from the publisher. Whereas traditional software publishers typically release new product features as part of new versions of software once in a few years, publishers using SaaS have an incentive to release new features as soon as they are completed. We show that this property of the SaaS licensing model leads to greater investment in product development under most conditions. This increased investment leads to higher software quality in equilibrium under SaaS compared to perpetual licensing. The software publisher earns greater profits under SaaS while social welfare is also higher",
"title": ""
},
{
"docid": "d62a68d6fcd5c2ae4635709007e471da",
"text": "We introduce a new method to combine the output probabilities of convolutional neural networks which we call Weighted Convolutional Neural Network Ensemble. Each network has an associated weight that makes networks with better performance have a greater influence at the time to classify in relation to networks that performed worse. This new approach produces better results than the common method that combines the networks doing just the average of the output probabilities to make the predictions. We show the validity of our proposal by improving the classification rate on a common image classification benchmark.",
"title": ""
},
{
"docid": "995655a6a9f662d33e0525b3ea236ce4",
"text": "A well-known problem in the design of operating systems is the selection of a resource allocation policy that will prevent deadlock. Deadlock is the situation in which resources have been allocated to various tasks in such a way that none of the tasks can continue. The various published solutions have been somewhat restrictive: either they do not handle the problem in sufficient generality or they suggest policies which will on occasion refuse a request which could have been safely granted. Algorithms are presented which examine a request in the light of the current allocation of resources and determi.~e whether or not the granting of the request will introduce the possibility of a deadlock. Proofs given in the appendixes show that the conditions imposed by the algorithms are both necessary and sufficient to prevent deadlock. The algorithms have been successfully used in the THE system.",
"title": ""
},
{
"docid": "69bb9ac73e4135dbfa00084a734adfa7",
"text": "Mobility-assistive device such as powered wheelchair is very useful for disabled people, to gain some physical independence. The three main functions of the proposed system are, 1) wheelchair navigation using multiple input, 2) obstacle detection using IR sensors, 3) home automation for disable person. Wheelchair can be navigated through i)voice command or ii) moving head or hand in four fixed position which is captured using accelerometer sensor built in android phone. Using 4 IR sensors we can avoid the risk of collision and injury and can maintain some safer distance from the objects. Disable person cannot stand up and switch on-off the light or fan every time. So to give them more relaxation this system offers home automation by giving voice command to the android phone or by manually swipe the button on the screen. The system can be available at very low cost so that more number of disable persons can get benefits.",
"title": ""
},
{
"docid": "6d0ba36e4371cbd9aa7d136aec11f92d",
"text": "The DNS is a fundamental service that has been repeatedly attacked and abused. DNS manipulation is a prominent case: Recursive DNS resolvers are deployed to explicitly return manipulated answers to users' queries. While DNS manipulation is used for legitimate reasons too (e.g., parental control), rogue DNS resolvers support malicious activities, such as malware and viruses, exposing users to phishing and content injection. We introduce REMeDy, a system that assists operators to identify the use of rogue DNS resolvers in their networks. REMeDy is a completely automatic and parameter-free system that evaluates the consistency of responses across the resolvers active in the network. It operates by passively analyzing DNS traffic and, as such, requires no active probing of third-party servers. REMeDy is able to detect resolvers that manipulate answers, including resolvers that affect unpopular domains. We validate REMeDy using large-scale DNS traces collected in ISP networks where more than 100 resolvers are regularly used by customers. REMeDy automatically identifies regular resolvers, and pinpoint manipulated responses. Among those, we identify both legitimate services that offer additional protection to clients, and resolvers under the control of malwares that steer traffic with likely malicious goals.",
"title": ""
},
{
"docid": "2a818337c472caa1e693edb05722954b",
"text": "UNLABELLED\nThis study focuses on the relationship between classroom ventilation rates and academic achievement. One hundred elementary schools of two school districts in the southwest United States were included in the study. Ventilation rates were estimated from fifth-grade classrooms (one per school) using CO(2) concentrations measured during occupied school days. In addition, standardized test scores and background data related to students in the classrooms studied were obtained from the districts. Of 100 classrooms, 87 had ventilation rates below recommended guidelines based on ASHRAE Standard 62 as of 2004. There is a linear association between classroom ventilation rates and students' academic achievement within the range of 0.9-7.1 l/s per person. For every unit (1 l/s per person) increase in the ventilation rate within that range, the proportion of students passing standardized test (i.e., scoring satisfactory or above) is expected to increase by 2.9% (95%CI 0.9-4.8%) for math and 2.7% (0.5-4.9%) for reading. The linear relationship observed may level off or change direction with higher ventilation rates, but given the limited number of observations, we were unable to test this hypothesis. A larger sample size is needed for estimating the effect of classroom ventilation rates higher than 7.1 l/s per person on academic achievement.\n\n\nPRACTICAL IMPLICATIONS\nThe results of this study suggest that increasing the ventilation rates toward recommended guideline ventilation rates in classrooms should translate into improved academic achievement of students. More studies are needed to fully understand the relationships between ventilation rate, other indoor environmental quality parameters, and their effects on students' health and achievement. Achieving the recommended guidelines and pursuing better understanding of the underlying relationships would ultimately support both sustainable and productive school environments for students and personnel.",
"title": ""
},
{
"docid": "88c713e4358ab9fc9a6345aeed2105a9",
"text": "The train scheduling problem is an integer programming problem known to be NP hard. In practice such a problem is often required to be solved in real time, hence a quick heuristic that allows a good feasible solution to be obtained in a predetermined and finite number of steps is most desired. We propose an algorithm which is based on local optimality criteria in the event of a potential crossing conflict. The suboptimal but feasible solution can be obtained very quickly in polynomial time. The model can also be generalized to cater for the possibility of overtaking when the trains have different speed. We also furnish a complexity analysis to show the NP-completeness of the problem. Simulation results for two non-trivial examples are presented to demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "057e7fa07d106553c7bf52f20be6a26d",
"text": "A half-bridge LLC resonant converter with Zero Voltage Switching (ZVS) and Pulse Frequency Modulation (PFM) is a lucrative topology for DC/DC conversion. A Digital Signal Controller (DSC) provides component cost reduction, flexible design, and the ability to monitor and process the system conditions to achieve greater stability. The dynamics of the LLC resonant converter are investigated using the small signal modeling technique based on Extended Describing Functions (EDF) methodology. Also, a comprehensive description of the design for the compensator for control of the LLC converter is presented. INTRODUCTION The LLC resonant converter topology, illustrated in Figure 1, allows ZVS for half-bridge MOSFETs, thereby considerably lowering the switching losses and improving the converter efficiency. The control system design of resonant converters is different from the conventional fixed frequency Pulse-Width Modulation (PWM) converters. In order to design a suitable digital compensator, the large signal and small signal models of the LLC resonant converter are derived using the EDF technique. FIGURE 1: LLC RESONANT CONVERTER SCHEMATIC Author: Meeravali Shaik and Ramesh Kankanala Microchip Technology Inc.",
"title": ""
},
{
"docid": "aa03d917910a3da1f22ceea8f5b8d1c8",
"text": "We train a language-universal dependency parser on a multilingual collection of treebanks. The parsing model uses multilingual word embeddings alongside learned and specified typological information, enabling generalization based on linguistic universals and based on typological similarities. We evaluate our parser’s performance on languages in the training set as well as on the unsupervised scenario where the target language has no trees in the training data, and find that multilingual training outperforms standard supervised training on a single language, and that generalization to unseen languages is competitive with existing model-transfer approaches.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
},
{
"docid": "bf7e67dededd5f4585aaefecc60e7c1a",
"text": "Multidimensional long short-term memory recurrent neural networks achieve impressive results for handwriting recognition. However, with current CPU-based implementations, their training is very expensive and thus their capacity has so far been limited. We release an efficient GPU-based implementation which greatly reduces training times by processing the input in a diagonal-wise fashion. We use this implementation to explore deeper and wider architectures than previously used for handwriting recognition and show that especially the depth plays an important role. We outperform state of the art results on two databases with a deep multidimensional network.",
"title": ""
}
] |
scidocsrr
|
770c68a8a6f2cc155794a23b78a99059
|
Catastrophic Forgetting in Connectionist Networks : Causes , Consequences and Solutions
|
[
{
"docid": "c4fcbe21b7ca6aeac5720b254b02df70",
"text": "This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration. Back Propagation is Sensitive to Initial Conditions John F. Kolen Jordan B. Pollack Laboratory for Artificial Intelligence Research Computer and Information Science Department The Ohio State University Columbus, Ohio 43210, USA kolen-j@cis.ohio-state.edu, pollack@cis.ohio-state.edu",
"title": ""
}
] |
[
{
"docid": "80655e659e9cf0456595259f2969fe42",
"text": "The induction motor equivalent circuit parameters are required for many performance and planning studies involving induction motors. These parameters are typically calculated from standardized motor performance tests, such as the no load, full load, and locked rotor tests. However, standardized test data is not typically available to the end user. Alternatively, the equivalent circuit parameters may be estimated based on published performance data for the motor. This paper presents an iterative method for estimating the induction motor equivalent circuit parameters using only the motor nameplate data.",
"title": ""
},
{
"docid": "5a0dc8516206c5b9a5944e425014747f",
"text": "Chronic wasting disease (CWD) is a fatal, transmissible prion disease that affects captive and free-ranging deer, elk, and moose. Although the zoonotic potential of CWD is considered low, identification of multiple CWD strains and the potential for agent evolution upon serial passage hinders a definitive conclusion. Surveillance for CWD in free-ranging populations has documented a continual geographic spread of the disease throughout North America. CWD prions are shed from clinically and preclinically affected hosts, and CWD transmission is mediated at least in part by the environment, perhaps by soil. Much remains unknown, including the sites and mechanisms of prion uptake in the naive host. There are no therapeutics or effective eradication measures for CWD-endemic populations. Continued surveillance and research of CWD and its effects on cervid ecosystems is vital for controlling the long-term consequences of this emerging disease.",
"title": ""
},
{
"docid": "346cd0b680f7da2ff8ab3d97a294086c",
"text": "Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (non-local and non-sequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.",
"title": ""
},
{
"docid": "3a0cac0050f40b9ce62bb0d4234ecf52",
"text": "The ephemeral nature of human communication via networks today poses interesting and challenging problems for information technologists. The Intelink intelligence network, for example, has a need to monitor chat-room conversations to ensure the integrity of sensitive data being transmitted via the network. However, the sheer volume of communication in venues such as email, newsgroups, and chat precludes manual techniques of information management. It has been estimated that over 430 million instant messages, for example, are exchanged each day on the America Online network [3]. Although a not insignificant fraction of such data may be temporarily archived (e.g., newsgroups), no systematic mechanisms exist for accumulating these artifacts of communication in a form that lends itself to the construction of models of semantics [12]. In essence, dynamic techniques of analysis are needed if textual data of this nature is to be effectively mined. This article reports our progress in developing a text mining tool for analysis of chat-room conversations. Central to our efforts is the development of functionality to answer questions such as \"What topics are being discussed in a chat-room?\", \"Who is discussing which topics?\" and \"Who is interacting with whom?\" The objective of our research is to develop technology that can automatically identify such patterns of interaction in both social and semantic terms. In this article we report our preliminary findings in identifying threads of conversation in multi-topic, multi-person chat-rooms. We have achieved promising results in terms of precision and recall by employing pattern recognition techniques based on finite state automata. We also report the design of our approach to building models of social and semantic interactions based on our HDDI text mining infrastructure [13].",
"title": ""
},
{
"docid": "9582bf78b9227fa4fd2ebdb957138571",
"text": "The prestige of publication has been based on traditional citation metrics, most commonly journal impact factor. However, the Internet has radically changed the speed, flow, and sharing of medical information. Furthermore, the explosion of social media, along with development of popular professional and scientific websites and blogs, has led to the need for alternative metrics, known as altmetrics, to quantify the wider impact of research. We explore the evolution of current research impact metrics and examine the evolving role of altmetrics in measuring the wider impact of research. We suggest that altmetrics used in research evaluation should be part of an informed peer-review process such as traditional metrics. Moreover, results based on altmetrics must not lead to direct decision making about research, but instead, should be used to assist experts in making decisions. Finally, traditional and alternative metrics should complement, not replace, each other in the peer-review process.",
"title": ""
},
{
"docid": "aef5b80e4c161a779c182774178a54c9",
"text": "Students are continually exposed to a variety of stressors during their academic career, and this can have significant negative effects on their mental health and subjective wellbeing. In this paper we explore how gamified persuasive interventions can promote engagement in performing random acts of kindness to improve wellbeing and help students manage stressors more effectively. In a pilot study we investigated how participation levels in a gamified persuasive intervention that promotes random acts of kindness at University, are influenced by (1) different persuasive message types, and (2) different game challenge categories. Furthermore, we analysed the impact on behavioural intention by comparing pre-intention and post-intention to perform random acts of kindness. Participants were assigned 5 different quests each morning, for two days, and asked to complete as many as possible by the end of each day. Participants were divided into 2 groups and received different types of persuasive notifications during the day: Group A received messages that set out group goals and used the social comparison strategy, while Group B received messages that set out individual goals and used the self-monitoring strategy. The findings from the pilot study will inform the design of a larger study to investigate persuasive game-based interventions for subjective wellbeing. ACM Classification",
"title": ""
},
{
"docid": "f271fbf2cc674bd1fa7d5f0c8149ced4",
"text": "A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/ requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects towards conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. Index Terms Goal-driven requirements engineering, divergent requirements, conflict management, viewpoints, specification transformation, lightweight formal methods. ,((( 7UDQVDFWLRQV RQ 6RIWZDUH (QJLQHHULQJ 6SHFLDO ,VVXH RQ 0DQDJLQJ ,QFRQVLVWHQF\\ LQ 6RIWZDUH 'HYHORSPHQW 1RY",
"title": ""
},
{
"docid": "94d14d837104590fcf9055351fa59482",
"text": "The amygdala receives cortical inputs from the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC) that are believed to affect emotional control and cue-outcome contingencies, respectively. Although mPFC impact on the amygdala has been studied, how the OFC modulates mPFC-amygdala information flow, specifically the infralimbic (IL) division of mPFC, is largely unknown. In this study, combined in vivo extracellular single-unit recordings and pharmacological manipulations were used in anesthetized rats to examine how OFC modulates amygdala neurons responsive to mPFC activation. Compared with basal condition, pharmacological (N-Methyl-D-aspartate) or electrical activation of the OFC exerted an inhibitory modulation of the mPFC-amygdala pathway, which was reversed with intra-amygdala blockade of GABAergic receptors with combined GABAA and GABAB antagonists (bicuculline and saclofen). Moreover, potentiation of the OFC-related pathways resulted in a loss of OFC control over the mPFC-amygdala pathway. These results show that the OFC potently inhibits mPFC drive of the amygdala in a GABA-dependent manner; but with extended OFC pathway activation this modulation is lost. Our results provide a circuit-level basis for this interaction at the level of the amygdala, which would be critical in understanding the normal and pathophysiological control of emotion and contingency associations regulating behavior.",
"title": ""
},
{
"docid": "802f77b4e2b8c8cdfb68f80fe31d7494",
"text": "In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the 3-month long total trades of two different transaction modes (representative assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision tree based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for representative assisted mode and 0.06, 0.1, and 0.18% for online trading system, respectively. q 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7159c79664f69f7ebe95a12babfee1f5",
"text": "In information visualization, interaction is commonly carried out by using traditional input devices, and visual feedback is usually given on desktop displays. By contrast, recent advances in interactive surface technology suggest combining interaction and display functionality in a single device for a more direct interaction. With our work, we contribute to the seamless integration of interaction and display devices and introduce new ways of visualizing and directly interacting with information. Rather than restricting the interaction to the display surface alone, we explicitly use the physical three-dimensional space above it for natural interaction with multiple displays. For this purpose, we introduce tangible views as spatially aware lightweight displays that can be interacted with by moving them through the physical space on or above a tabletop display's surface. Tracking the 3D movement of tangible views allows us to control various parameters of a visualization with more degrees of freedom. Tangible views also facilitate making multiple -- previously virtual -- views physically \"graspable\". In this paper, we introduce a number of interaction and visualization patterns for tangible views that constitute the vocabulary for performing a variety of common visualization tasks. Several implemented case studies demonstrate the usefulness of tangible views for widely used information visualization approaches and suggest the high potential of this novel approach to support interaction with complex visualizations.",
"title": ""
},
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "4b3592efd8a4f6f6c9361a6f66a30a5f",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
{
"docid": "9d74aa736c43914c16262c6ce838d563",
"text": "In this paper, we propose two level control system for a mobile robot. The first level subsystem deals with the control of the linear and angular volocities using a multivariable PI controller described with a full matrix. The position control of the mobile robot represents the second level control, which is nonlinear. The nonlinear control design is implemented by a modified backstepping algorithm whose parameters are adjusted by a genetic algorithm, which is a robust nonlinear optimization method. The performance of the proposed system is investigated using a dynamic model of a nonholonomic mobile robot with friction. We present a new dynamic model in which the angular velocities of wheels are main variables. Simulation results show the good quality of position tracking capabilities a mobile robot with the various viscous friction torques. Copyright © 2005 IFAC.",
"title": ""
},
{
"docid": "001afb3fadd6bc4d4fe4c7f0c16b4ebf",
"text": "The authors propose a minimally invasive procedure for treating type-A3 amyelic thoracolumbar fractures according to Magerl classification (compression burst fractures). The procedure, percutaneous kyphoplasty, allows the fracture to be reduced and vertebral height to be restored by injecting bone cement into a cavity created in the vertebral body by an inflatable balloon introduced with the percutaneous approach. Four patients were successfully treated, with clinical and functional benefit in all cases. There were no complications. Per il trattamento delle fratture vertebrali toraco-lombari amieliche del tipo A3 secondo Magerl (fratture da scoppio prodotte per meccanismo compressivo), gli autori propongo una procedura mini-invasiva, la cifoplastica percutanea che mediante iniezione di cemento osseo in una cavità creata nel corpo vertebrale da un palloncino espansibile introdotto con approccio percutaneo consente la riduzione della frattura, il ripristino dell’altezza del corpo vertebrale. Sono stati trattati 4 pazienti con beneficio clinico e funzionale in tutti i casi. Non sono state rilevate complicanze.",
"title": ""
},
{
"docid": "5f3fb159055f95995f4d338d14646594",
"text": "............................................................................................................................... i Zusammenfassung ............................................................................................................. iii Table of",
"title": ""
},
{
"docid": "54bcaafa495d6d778bddbbb5d5cf906e",
"text": "Low-shot visual learning—the ability to recognize novel object categories from very few examples—is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a novel protocol to evaluate low-shot learning on complex images where the learner is permitted to first build a feature representation. Then, we propose and evaluate representation regularization techniques that improve the effectiveness of convolutional networks at the task of low-shot learning, leading to a 2x reduction in the amount of training data required at equal accuracy rates on the challenging ImageNet dataset.",
"title": ""
},
{
"docid": "88e9a282434e95a43366df7dfdf18a94",
"text": "Traditional approaches to building a large scale knowledge graph have usually relied on extracting information (entities, their properties, and relations between them) from unstructured text (e.g. Dbpedia). Recent advances in Convolutional Neural Networks (CNN) allow us to shift our focus to learning entities and relations from images, as they build robust models that require little or no pre-processing of the images. In this paper, we present an approach to identify and extract spatial relations (e.g., The girl is standing behind the table) from images using CNNs. Our research addresses two specific challenges: providing insight into how spatial relations are learned by the network and which parts of the image are used to predict these relations. We use the pre-trained network VGGNet to extract features from an image and train a Multi-layer Perceptron (MLP) on a set of synthetic images and the sun09 dataset to extract spatial relations. The MLP predicts spatial relations without a bounding box around the objects or the space in the image depicting the relation. To understand how the spatial relations are represented in the network, a heatmap is overlayed on the image to show the regions that are deemed important by the network. Also, we analyze the MLP to show the relationship between the activation of consistent groups of nodes and the prediction of a spatial relation. We show how the loss of these groups affects the network's ability to identify relations.",
"title": ""
},
{
"docid": "3d84f5f8322737bf8c6f440180e07660",
"text": "Incremental Dialog Processing (IDP) enables Spoken Dialog Systems to gradually process minimal units of user speech in order to give the user an early system response. In this paper, we present an application of IDP that shows its effectiveness in a task-oriented dialog system. We have implemented an IDP strategy and deployed it for one month on a real-user system. We compared the resulting dialogs with dialogs produced over the previous month without IDP. Results show that the incremental strategy significantly improved system performance by eliminating long and often off-task utterances that generally produce poor speech recognition results. User behavior is also affected; the user tends to shorten utterances after being interrupted by the system.",
"title": ""
},
{
"docid": "8536a89fdc1c3d1556a801b87e80b0c3",
"text": "Pattern solutions for software and architectures have significantly reduced design, verification, and validation times by mapping challenging problems into a solved generic problem. In the paper, we present an architecture pattern for ensuring synchronous computation semantics using the PALS protocol. We develop a modeling framework in AADL to automatically transform a synchronous design of a real-time distributed system into an asynchronous design satisfying the PALS protocol. We present a detailed example of how the PALS transformation works for a dual-redundant system. From the example, we also describe the general transformation in terms of intuitively defined AADL semantics. Furthermore, we develop a static analysis checker to find necessary conditions that must be satisfied in order for the PALS transformation to work correctly. The transformations and static checks that we have described are implemented in OSATE using the generated EMF metamodel API for model manipulation.",
"title": ""
},
{
"docid": "cee9b099f6ea087376b56067620e1c64",
"text": "This paper presents a set of techniques for predicting aggressive comments in social media. In a time when cyberbullying has, unfortunately, made its entrance into society and Internet, it becomes necessary to find ways for preventing and overcoming this phenomenon. One of these concerns the use of machine learning techniques for automatically detecting cases of cyberbullying; a primary task within this cyberbullying detection consists of aggressive text detection. We concretely explore different computational techniques for carrying out this task, either as a classification or as a regression problem, and our results suggest that a key feature is the identification of profane words.",
"title": ""
}
] |
scidocsrr
|
3369769f0f8898c14cf7ad38e1b2fd3b
|
Urban area characterization based on crowd behavioral lifelogs over Twitter
|
[
{
"docid": "f8e20046f9ad2e4ef63339f7c611e815",
"text": "We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.",
"title": ""
}
] |
[
{
"docid": "8f4f687aff724496efcc37ff7f6bbbeb",
"text": "Sentiment Analysis is new way of machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written for any product, organization, person or any other entity. Sentiment Analysis can be used to predict the mood of people that have impact on stock prices, therefore it can help in prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in stock market industry we have performed sentiment analysis on tweets related to Apple products, which are extracted from StockTwits (a social networking site) from 2010 to 2017. Along with tweets, we have also used market index data which is extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result each tweet is categorized as bullish or bearish. Then sentiment score and market data is used to build a SVM model to predict next day's stock movement. Results show that there is positive relation between people opinion and market data and proposed work has an accuracy of 76.65% in stock prediction.",
"title": ""
},
{
"docid": "51a9180623be4ddaf514377074edc379",
"text": "Breast region measurements are important for research, but they may also become significant in the legal field as a quantitative tool for preoperative and postoperative evaluation. Direct anthropometric measurements can be taken in clinical practice. The aim of this study was to compare direct breast anthropometric measurements taken with a tape measure and a compass. Forty women, aged 18–60 years, were evaluated. They had 14 anatomical landmarks marked on the breast region and arms. The union of these points formed eight linear segments and one angle for each side of the body. The volunteers were evaluated by direct anthropometry in a standardized way, using a tape measure and a compass. Differences were found between the tape measure and the compass measurements for all segments analyzed (p > 0.05). Measurements obtained by tape measure and compass are not identical. Therefore, once the measurement tool is chosen, it should be used for the pre- and postoperative measurements in a standardized way. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "7908e315d84cf916fb4a61a083be7fe6",
"text": "A base station antenna with dual-broadband and dual-polarization characteristics is presented in this letter. The proposed antenna contains four parts: a lower-band element, an upper-band element, arc-shaped baffle plates, and a box-shaped reflector. The lower-band element consists of two pairs of dipoles with additional branches for bandwidth enhancement. The upper-band element embraces two crossed hollow dipoles and is nested inside the lower-band element. Four arc-shaped baffle plates are symmetrically arranged on the reflector for isolating the lower- and upper-band elements and improving the radiation performance of upper-band element. As a result, the antenna can achieve a bandwidth of 50.6% for the lower band and 48.2% for the upper band when the return loss is larger than 15 dB, fully covering the frequency ranges 704–960 and 1710–2690 MHz for 2G/3G/4G applications. Measured port isolation larger than 27.5 dB in both the lower and upper bands is also obtained. At last, an array that consists of two lower-band elements and five upper-band elements is discussed for giving an insight into the future array design.",
"title": ""
},
{
"docid": "590b25cb2d6fe3c7d655e4524f169235",
"text": "Adult onset Still's disease (AOSD) is a rare systemic inflammatory disease of unknown etiology and pathogenesis that presents in 5 to 10% of patients as fever of unknown origin (FUO) accompanied by systemic manifestations. We report an interesting case of a 33-year-old African-American male who presented with one-month duration of FUO along with skin rash, sore throat, and arthralgia. After extensive workup, potential differential diagnoses were ruled out and the patient was diagnosed with AOSD based on the Yamaguchi criteria. The case history, incidence, pathogenesis, clinical manifestations, differential diagnoses, diagnostic workup, treatment modalities, and prognosis of AOSD are discussed in this case report.",
"title": ""
},
{
"docid": "15f51cbbb75d236a5669f613855312e0",
"text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.",
"title": ""
},
{
"docid": "be4469990da6cea2cec35fedb2c37df1",
"text": "In this paper an attempt has been made to review the research studies on application of data mining techniques in the field of agriculture. Some of the techniques, such asID3 algorithms, the k-means, the k nearest neighbor, artificial neural networks and support vector machines applied in the field of agriculture were presented. Data mining in application in agriculture is a relatively new approach for forecasting / predicting of agricultural crop/animal management. This article explores the applications of data mining techniques in the field of agriculture and allied sciences. I. Introduction Data mining is the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviours, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analysis offered by data mining move beyond the analysis of past events provided by retrospective tools typical of decision support systems. Agriculture and allied activities constitute the single largest component of India's gross domestic product, contributing nearly 25% of the total and nearly 60% of Indian population depends on this profession. Due to vagaries of climate factors the agricultural productivities in India are continuously decreasing over a decade. The reasons for this were studied mostly using regression analysis. In this paper an attempt has been made to compile the research findings of different researchers who used data.",
"title": ""
},
{
"docid": "40dcca8b2f91be95d29f88cfa8a78a52",
"text": "Deadlocks are a rather undesirable situation in a highly automated flexible manufacturing system. Their occurrences often deteriorate the utilization of resources and may lead to catastrophic results in safety-critical systems. Graph theory, automata, and Petri nets are three important mathematical tools to handle deadlock problems in resource allocation systems. Particularly, Petri nets are considered as a popular formalism because of their inherent characteristics. They received much attention over the past decades to deal with deadlock problems, leading to a variety of deadlock-control policies. This study surveys the state-of-the-art deadlock-control strategies for automated manufacturing systems by reviewing the principles and techniques that are involved in preventing, avoiding, and detecting deadlocks. The focus is deadlock prevention due to its large and continuing stream of efforts. A control strategy is evaluated in terms of computational complexity, behavioral permissiveness, and structural complexity of its deadlock-free supervisor. This study provides readers with a conglomeration of the updated results in this area and facilitates engineers in finding a suitable approach for their industrial scenarios. Future research directions are finally discussed.",
"title": ""
},
{
"docid": "b1f348ff63eaa97f6eeda5fcd81330a9",
"text": "The recent expansion of the cloud computing paradigm has motivated educators to include cloud-related topics in computer science and computer engineering curricula. While programming and algorithm topics have been covered in different undergraduate and graduate courses, cloud architecture/system topics are still not usually studied in academic contexts. But design, deployment and management of datacenters, virtualization technologies for cloud, cloud management tools and similar issues should be addressed in current computer science and computer engineering programs. This work presents our approach and experiences in designing and implementing a curricular module covering all these topics. In this approach the utilization of a simulation tool, CloudSim, is essential to allow the students a practical approximation to the course contents.",
"title": ""
},
{
"docid": "eb9459d0eb18f0e49b3843a6036289f9",
"text": "Experimental research has had a long tradition in psychology and education. When psychology emerged as an infant science during the 1900s, it modeled its research methods on the established paradigms of the physical sciences, which for centuries relied on experimentation to derive principals and laws. Subsequent reliance on experimental approaches was strengthened by behavioral approaches to psychology and education that predominated during the first half of this century. Thus, usage of experimentation in educational technology over the past 40 years has been influenced by developments in theory and research practices within its parent disciplines. In this chapter, we examine practices, issues, and trends related to the application of experimental research methods in educational technology. The purpose is to provide readers with sufficient background to understand and evaluate experimental designs encountered in the literature and to identify designs that will effectively address questions of interest in their own research. In an introductory section, we define experimental research, differentiate it from alternative approaches, and identify important concepts in its use (e.g., internal vs. external validity). We also suggest procedures for conducting experimental studies and publishing them in educational technology research journals. Next, we analyze uses of experimental methods by instructional researchers, extending the analyses of three decades ago by Clark and Snow (1975). In the concluding section, we turn to issues in using experimental research in educational technology, to include balancing internal and external validity, using multiple outcome measures to assess learning processes and products, using item responses vs. aggregate scores as dependent variables, reporting effect size as a complement to statistical significance, and media replications vs. media comparisons.",
"title": ""
},
{
"docid": "0450f2d507d447d0ea21fcf536ba8b08",
"text": "Recently, People pay more and more attention to the robot education. Experts at home and abroad in education proposed that education robot could serve as an effective platform for implementing innovation education, quality-oriented education, and technology education. This paper will elaborate the development of robot education in China's basic education, analyze problems along with several strategies from the following three perspectives: robot competitions, robot teaching, and researches on robot education.",
"title": ""
},
{
"docid": "e04bc357c145c38ed555b3c1fa85c7da",
"text": "This paper presents Hybrid (RSA & AES) encryption algorithm to safeguard data security in Cloud. Security being the most important factor in cloud computing has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.",
"title": ""
},
{
"docid": "e11b6fd2dcec42e7b726363a869a0d95",
"text": "Future frame prediction in videos is a promising avenue for unsupervised video representation learning. Video frames are naturally generated by the inherent pixel flows from preceding frames based on the appearance and motion dynamics in the video. However, existing methods focus on directly hallucinating pixel values, resulting in blurry predictions. In this paper, we develop a dual motion Generative Adversarial Net (GAN) architecture, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a duallearning mechanism. The primal future-frame prediction and dual future-flow prediction form a closed loop, generating informative feedback signals to each other for better video prediction. To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the futureflow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows. Our dual motion GAN also handles natural motion uncertainty in different pixel locations with a new probabilistic motion encoder, which is based on variational autoencoders. Extensive experiments demonstrate that the proposed dual motion GAN significantly outperforms stateof-the-art approaches on synthesizing new video frames and predicting future flows. Our model generalizes well across diverse visual scenes and shows superiority in unsupervised video representation learning.",
"title": ""
},
{
"docid": "7f1fefbcbe5bac0cae0151477cda5886",
"text": "In this study, a multi-level type multi-phase resonant converter is presented for high power wireless EV charging applications. As an alternative to the traditional frequency and phase shift control methods, a hybrid phase-frequency control strategy is implemented to improve the system efficiency. In order to confirm the proposed converter and control technique, a laboratory prototype wireless EV charger is designed using 8 inches air gap coreless transformer and rectifier. The proposed control is compared with the conventional control methods for various load conditions at the different power levels. The experimental results show that the proposed converter is within the desired frequency range while regulating output from 0 to 15 kW with 750 V input DC bus voltage.",
"title": ""
},
{
"docid": "d34baa3591e9b6fe1a261d2fadaf23dc",
"text": "This paper proposes a high-efficiency zero-voltage-switching (ZVS) AC-DC light-emitting-diode (LED) driver. The structure of the proposed converter is based on a buck-boost power-factor-correction (PFC) converter. Through replacement of an output diode with a self-driven synchronous rectifier (SR), the conduction loss decreases significantly, and there is no switching loss because of the ZVS operation of the switching devices. In addition, no additional control circuit is required and cost reduction is achieved because the SR is self-driven. The efficiency of the proposed converter is higher than that of the conventional critical-conduction-mode (CRM) buck-boost PFC converter, owing to the reduced conduction loss of the high-side output rectifier and no switching loss of either of the switches. For verifying the ZVS operation and efficiency improvement of the proposed AC-DC LED driver, theoretical analysis and experimental results of a 48-[V] and 1.4-[A] prototype for LED driver are discussed.",
"title": ""
},
{
"docid": "3d06052330110c1a401c327af6140d43",
"text": "Many online videogames make use of characters controlled by both humans (avatar) and computers (agent) to facilitate game play. However, the level of agency a teammate shows potentially produces differing levels of social presence during play, which in turn may impact on the player experience. To better understand these effects, two experimental studies were conducted utilising cooperative multiplayer games (Left 4 Dead 2 and Rocket League). In addition, the effect of familiarity between players was considered. The trend across the two studies show that playing with another human is more enjoyable, and facilitates greater connection, cooperation, presence and positive mood than play with a computer agent. The implications for multiplayer game design is discussed.",
"title": ""
},
{
"docid": "16a3bf4df6fb8e61efad6f053f1c6f9c",
"text": "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 seconds on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1%). The method is evaluated on standard publicly available large-scale place recognition benchmarks containing street-view imagery of Pittsburgh and San Francisco. DisLoc is shown to outperform all baselines, while setting the new state-of-the-art on both benchmarks. The method is compatible with spatial reranking, which further improves recognition results. Finally, we also demonstrate that 7% of the least distinctive features can be removed, therefore reducing storage requirements and improving retrieval speed, without any loss in place recognition accuracy.",
"title": ""
},
{
"docid": "4932aedacd73a8af2793242ca7683bfc",
"text": "In this article, we propose a new remote user authentication scheme using smart cards. The scheme is based on the ElGamal’s public key cryptosystem. Our scheme does not require a system to maintain a password table for verifying the legitimacy of the login users. In addition, our scheme can withstand message replaying attack.",
"title": ""
},
{
"docid": "983fc1788fe5d9358eff85d3c16d000b",
"text": "Object tracking over wide-areas, such as an airport, the downtown of a large city or any large public area, is done by multiple cameras. Especially in realistic application, those cameras have non overlapping Field of Views (FOVs). Multiple camera tracking is very important to establish correspondence among detected objects across different cameras. In this paper we investigate color histogram techniques to evaluate inter-camera tracking algorithm based on object appearances. We compute HSV and RGB color histograms in order to evaluate their performance in establishing correspondence between object appearances in different FOVs before and after Cumulative Brightness Transfer Function (CBTF).",
"title": ""
},
{
"docid": "6d2efd95c2b3486bec5b4c2ab2db18ad",
"text": "The goal of this work is to replace objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene using the approach from Gupta et al. [13]. We use a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel normals in images containing rendered synthetic objects. When tested on real data, it outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place the model that fits the best into the scene. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [33], while being an order of magnitude faster at the same time.",
"title": ""
},
{
"docid": "68bd70bc546e983f5fa71e17bdde3e00",
"text": "Hand–eye calibration is a classic problem in robotics that aims to find the transformation between two rigidly attached reference frames, usually a camera and a robot end-effector or a motion tracker. Most hand–eye calibration techniques require two data streams, one containing the eye (camera) motion and the other containing the hand (robot/tracker) motion, and the classic hand–eye formulation assumes that both data streams are fully synchronized. However, different motion capturing devices and cameras often have variable capture rates and timestamps that cannot always be easily triggered in sync. Although probabilistic approaches have been proposed to solve for nonsynchronized data streams, they are not able to deal with different capture rates. We propose a new approach for unsynchronized hand–eye calibration that is able to deal with different capture rates and time delays. Our method interpolates and resamples the signal with the lowest capture rate in a way that is consistent with the twist motion constraints of the hand–eye problem. Cross-correlation techniques are then used to produce two fully synchronized data streams that can be used to solve the hand–eye problem with classic methods. In our experimental section, we show promising validation results on simulation data and also on real data obtained from a robotic arm holding a camera.",
"title": ""
}
] |
scidocsrr
|
a19b02612e08918c50d0611a2e920529
|
CroRank: Cross Domain Personalized Transfer Ranking for Collaborative Filtering
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "7ad0c164ece34159f9051c1510761aa8",
"text": "Collaborative filtering (CF) is a major technique in recommender systems to help users find their potentially desired items. Since the data sparsity problem is quite commonly encountered in real-world scenarios, Cross-Domain Collaborative Filtering (CDCF) hence is becoming an emerging research topic in recent years. However, due to the lack of sufficient dense explicit feedbacks and even no feedback available in users' uninvolved domains, current CDCF approaches may not perform satisfactorily in user preference prediction. In this paper, we propose a generalized Cross Domain Triadic Factorization (CDTF) model over the triadic relation user-item-domain, which can better capture the interactions between domain-specific user factors and item factors. In particular, we devise two CDTF algorithms to leverage user explicit and implicit feedbacks respectively, along with a genetic algorithm based weight parameters tuning algorithm to trade off influence among domains optimally. Finally, we conduct experiments to evaluate our models and compare with other state-of-the-art models by using two real world datasets. The results show the superiority of our models against other comparative models.",
"title": ""
},
{
"docid": "c924aada75b7e3ec231d72f26b936330",
"text": "To solve the sparsity problem in collaborative filtering, researchers have introduced transfer learning as a viable approach to make use of auxiliary data. Most previous transfer learning works in collaborative filtering have focused on exploiting point-wise ratings such as numerical ratings, stars, or binary ratings of likes/dislike s. However, in many real-world recommender systems, many users may be unwilling or unlikely to rate items with precision. In contrast, practitioners can turn to various non-preference data to estimate a range or rating distribution of a user’s preference on an item. Such a range or rating distribution is called an uncertain rating since it represents a rating spectrum of uncertainty instead of an accurate point-wise score. In this paper, we propose an efficient transfer learning solution for collaborative filtering, known astransfer by integrative factorization(TIF), to leverage such auxiliary uncertain ratings to improve the performance of recommendation. In particular, we integrate auxiliary data of uncertain ratings as additional constraints in the target matrix factorization problem, and learn an expected rating value for each uncertain rating automatically. The advantages of our proposed approach include the efficiency and the improved effectiveness of collaborative filtering, showing that incorporating the auxiliary data of uncertain ratings can really bring a benefit. Experimental results on two movie recommendation tasks show that our TIF algorithm performs significantly better over a state-of-the-art non-transfer learning method.",
"title": ""
}
] |
[
{
"docid": "77233d4f7a7bb0150b5376c7bb93c108",
"text": "In-filled frame structures are commonly used in buildings, even in those located in seismically active regions. Precent codes unfortunately, do not have adequate guidance for treating the modelling, analysis and design of in-filled frame structures. This paper addresses this need and first develops an appropriate technique for modelling the infill-frame interface and then uses it to study the seismic response of in-filled frame structures. Finite element time history analyses under different seismic records have been carried out and the influence of infill strength, openings and soft-storey phenomenon are investigated. Results in terms of tip deflection, fundamental period, inter-storey drift ratio and stresses are presented and they will be useful in the seismic design of in-filled frame structures.",
"title": ""
},
{
"docid": "af2dbc8d3a04fb3059263b8c367ac856",
"text": "The area of sentiment mining (also called sentiment extraction, opinion mining, opinion extraction, sentiment analysis, etc.) has seen a large increase in academic interest in the last few years. Researchers in the areas of natural language processing, data mining, machine learning, and others have tested a variety of methods of automating the sentiment analysis process. In this research work, new hybrid classification method is proposed based on coupling classification methods using arcing classifier and their performances are analyzed in terms of accuracy. A Classifier ensemble was designed using Naïve Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA). In the proposed work, a comparative study of the effectiveness of ensemble technique is made for sentiment classification. The feasibility and the benefits of the proposed approaches are demonstrated by means of restaurant review that is widely used in the field of sentiment classification. A wide range of comparative experiments are conducted and finally, some in-depth discussion is presented and conclusions are drawn about the effectiveness of ensemble technique for sentiment classification. Keywords— Accuracy, Arcing classifier, Genetic Algorithm (GA). Naïve Bayes (NB), Sentiment Mining, Support Vector Machine (SVM)",
"title": ""
},
{
"docid": "2dde6c9387ee0a51220d92a4bc0bb8bf",
"text": "We propose a generic algorithm for computation of similarit y measures for sequential data. The algorithm uses generalized suffix trees f or efficient calculation of various kernel, distance and non-metric similarity func tions. Its worst-case run-time is linear in the length of sequences and independen t of the underlying embedding language, which can cover words, k-grams or all contained subsequences. Experiments with network intrusion detection, DN A analysis and text processing applications demonstrate the utility of distan ces and similarity coefficients for sequences as alternatives to classical kernel fu ctions.",
"title": ""
},
{
"docid": "eeac967209e931538e0b7a035c876446",
"text": "INTRODUCTION\nThis is the first of seven articles from a preterm birth and stillbirth report. Presented here is an overview of the burden, an assessment of the quality of current estimates, review of trends, and recommendations to improve data.\n\n\nPRETERM BIRTH\nFew countries have reliable national preterm birth prevalence data. Globally, an estimated 13 million babies are born before 37 completed weeks of gestation annually. Rates are generally highest in low- and middle-income countries, and increasing in some middle- and high-income countries, particularly the Americas. Preterm birth is the leading direct cause of neonatal death (27%); more than one million preterm newborns die annually. Preterm birth is also the dominant risk factor for neonatal mortality, particularly for deaths due to infections. Long-term impairment is an increasing issue.\n\n\nSTILLBIRTH\nStillbirths are currently not included in Millennium Development Goal tracking and remain invisible in global policies. For international comparisons, stillbirths include late fetal deaths weighing more than 1000g or occurring after 28 weeks gestation. Only about 2% of all stillbirths are counted through vital registration and global estimates are based on household surveys or modelling. Two global estimation exercises reached a similar estimate of around three million annually; 99% occur in low- and middle-income countries. One million stillbirths occur during birth. Global stillbirth cause-of-death estimates are impeded by multiple, complex classification systems.\n\n\nRECOMMENDATIONS TO IMPROVE DATA\n(1) increase the capture and quality of pregnancy outcome data through household surveys, the main data source for countries with 75% of the global burden; (2) increase compliance with standard definitions of gestational age and stillbirth in routine data collection systems; (3) strengthen existing data collection mechanisms--especially vital registration and facility data--by instituting a standard death certificate for stillbirth and neonatal death linked to revised International Classification of Diseases coding; (4) validate a simple, standardized classification system for stillbirth cause-of-death; and (5) improve systems and tools to capture acute morbidity and long-term impairment outcomes following preterm birth.\n\n\nCONCLUSION\nLack of adequate data hampers visibility, effective policies, and research. Immediate opportunities exist to improve data tracking and reduce the burden of preterm birth and stillbirth.",
"title": ""
},
{
"docid": "e943bc89e2b8318ce30002a68ee84124",
"text": "Evaluation has become a fundamental part of visualization research and researchers have employed many approaches from the field of human-computer interaction like measures of task performance, thinking aloud protocols, and analysis of interaction logs. Recently, eye tracking has also become popular to analyze visual strategies of users in this context. This has added another modality and more data, which requires special visualization techniques to analyze this data. However, only few approaches exist that aim at an integrated analysis of multiple concurrent evaluation procedures. The variety, complexity, and sheer amount of such coupled multi-source data streams require a visual analytics approach. Our approach provides a highly interactive visualization environment to display and analyze thinking aloud, interaction, and eye movement data in close relation. Automatic pattern finding algorithms allow an efficient exploratory search and support the reasoning process to derive common eye-interaction-thinking patterns between participants. In addition, our tool equips researchers with mechanisms for searching and verifying expected usage patterns. We apply our approach to a user study involving a visual analytics application and we discuss insights gained from this joint analysis. We anticipate our approach to be applicable to other combinations of evaluation techniques and a broad class of visualization applications.",
"title": ""
},
{
"docid": "f2e8e1b729276ee3316e6d0162f731ac",
"text": "In the field of reinforcement learning there has been recent progress towards safety and high-confidence bounds on policy performance. However, to our knowledge, no practical methods exist for determining high-confidence policy performance bounds in the inverse reinforcement learning setting— where the true reward function is unknown and only samples of expert behavior are given. We propose a sampling method based on Bayesian inverse reinforcement learning that uses demonstrations to determine practical high-confidence upper bounds on the α-worst-case difference in expected return between any evaluation policy and the optimal policy under the expert’s unknown reward function. We evaluate our proposed bound on both a standard grid navigation task and a simulated driving task and achieve tighter and more accurate bounds than a feature count-based baseline. We also give examples of how our proposed bound can be utilized to perform riskaware policy selection and risk-aware policy improvement. Because our proposed bound requires several orders of magnitude fewer demonstrations than existing high-confidence bounds, it is the first practical method that allows agents that learn from demonstration to express confidence in the quality of their learned policy.",
"title": ""
},
{
"docid": "13dde903c4568b7077d43e1786a1175b",
"text": "In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.",
"title": ""
},
{
"docid": "ebb024bbd923d35fd86adc2351073a48",
"text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.",
"title": ""
},
{
"docid": "19cb14825c6654101af1101089b66e16",
"text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "ab85854fab566b49dd07ee9c9a9cf990",
"text": "A traveling-wave circularly-polarized microstrip array antenna is presented in this paper. It uses a circularly polarized dual-feed radiating element. The element is a rectangular patch with two chamfered corners. It is fed by microstrip lines, making it possible for the radiating element and feed lines to be realized and integrated in a single layer. A four-element array is designed, built and tested. Measured performance of the antenna is presented, where a good agreement between the simulated and measured results is obtained and demonstrated.",
"title": ""
},
{
"docid": "baefa50fee4f5ea6aaa314ae97342145",
"text": "MicroRNAs (miRNAs) are an extensive class of newly discovered endogenous small RNAs, which negatively regulate gene expression at the post-transcription levels. As the application of next-generation deep sequencing and advanced bioinformatics, the miRNA-related study has been expended to non-model plant species and the number of identified miRNAs has dramatically increased in the past years. miRNAs play a critical role in almost all biological and metabolic processes, and provide a unique strategy for plant improvement. Here, we first briefly review the discovery, history, and biogenesis of miRNAs, then focus more on the application of miRNAs on plant breeding and the future directions. Increased plant biomass through controlling plant development and phase change has been one achievement for miRNA-based biotechnology; plant tolerance to abiotic and biotic stress was also significantly enhanced by regulating the expression of an individual miRNA. Both endogenous and artificial miRNAs may serve as important tools for plant improvement.",
"title": ""
},
{
"docid": "eeff8eeb391e789a40cb8f900fa241e3",
"text": "We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a semisupervised variant, learn highly discriminative latent representations that often outperform the Gaussian VAE’s.",
"title": ""
},
{
"docid": "5c62f66d948f15cea55c1d2c9d10f229",
"text": "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.",
"title": ""
},
{
"docid": "6379d5330037a774f9ceed4c51bda1f6",
"text": "Despite long-standing observations on diverse cytokinin actions, the discovery path to cytokinin signaling mechanisms was tortuous. Unyielding to conventional genetic screens, experimental innovations were paramount in unraveling the core cytokinin signaling circuitry, which employs a large repertoire of genes with overlapping and specific functions. The canonical two-component transcription circuitry involves His kinases that perceive cytokinin and initiate signaling, as well as His-to-Asp phosphorelay proteins that transfer phosphoryl groups to response regulators, transcriptional activators, or repressors. Recent advances have revealed the complex physiological functions of cytokinins, including interactions with auxin and other signal transduction pathways. This review begins by outlining the historical path to cytokinin discovery and then elucidates the diverse cytokinin functions and key signaling components. Highlights focus on the integration of cytokinin signaling components into regulatory networks in specific contexts, ranging from molecular, cellular, and developmental regulations in the embryo, root apical meristem, shoot apical meristem, stem and root vasculature, and nodule organogenesis to organismal responses underlying immunity, stress tolerance, and senescence.",
"title": ""
},
{
"docid": "b94e63f386340350449f600766da8fec",
"text": "In the future, production systems will consist of modular and flexible production components, being able to adapt to completely new manufacturing processes. This requirement arises from market turbulences caused by customer demands, i. e. highly customized goods in smaller production batches, or phenomenon like commercial crisis. In order to achieve adaptable production systems, one of the major challenges is to develop suitable autoconfiguration mechanisms for industrial automation systems. This paper presents a two-step architecture for the autoconfiguration of real-time Ethernet (RTE) systems. As a first step, an RTE-independent device discovery mechanism is introduced. Afterwards, it is shown how the parameters of an RTE can be configured automatically using Profinet IO as an exemplary RTE system. In contrast to the existing approaches, the proposed discovery mechanism is based on the OPC Unified Architecture (OPC-UA). In addition, a procedure to autoconfigure modular IO-Devices is introduced.",
"title": ""
},
{
"docid": "9362781ea97715077d54e8e9645552e2",
"text": "Web sites are often a mixture of static sites and programs that integrate relational databases as a back-end. Software that implements Web sites continuously evolve to meet ever-changing user needs. As a Web sites evolve, new versions of programs, interactions and functionalities are added and existing ones are removed or modified. Web sites require configuration and programming attention to assure security, confidentiality, and trustiness of the published information. During evolution of Web software, from one version to the next one, security flaws may be introduced, corrected, or ignored. This paper presents an investigation of the evolution of security vulnerabilities as detected by propagating and combining granted authorization levels along an inter-procedural control flow graph (CFG) together with required security levels for DB accesses with respect to SQL-injection attacks. The paper reports results about experiments performed on 31 versions of phpBB, that is a publicly available bulletin board written in PHP, version 1.0.0 (9547 LOC) to version 2.0.22 (40663 LOC) have been considered as a case study. Results show that the vulnerability analysis can be used to observe and monitor the evolution of security vulnerabilities in subsequent versions of the same software package. Suggestions for further research are also presented.",
"title": ""
},
{
"docid": "98d0a45eb8da2fa8541055014db6e238",
"text": "OBJECTIVE\nThe Multicultural Quality of Life Index is a concise instrument for comprehensive, culture-informed, and self-rated assessment of health-related quality of life. It is composed of 10 items (from physical well-being to global perception of quality of life). Each item is rated on a 10-point scale. The objective was to evaluate the reliability (test-retest), internal structure, discriminant validity, and feasibility of the Multicultural Quality of Life Index in Lima, Peru.\n\n\nMETHOD\nThe reliability was studied in general medical patients (n = 30) hospitalized in a general medical ward. The Multicultural Quality of Life Index was administered in two occasions and the correlation coefficients (\"r\") between both interviews were calculated. Its discriminant validity was studied statistically comparing the average score in a group of patients with AIDS (with presumed lower quality of life, n = 50) and the average score in a group of dentistry students and professionals (with presumed higher quality of life, n = 50). Data on its applicability and internal structure were compiled from the 130 subjects.\n\n\nRESULTS\nA high reliability correlation coefficient (r = 0.94) was found for the total score. The discriminant validity study found a significant difference between mean total score in the samples of presumed higher (7.66) and lower (5.32) quality of life. The average time to complete the Multicultural Quality of Life Index was less than 4 minutes and was reported by the majority of subjects as easily applicable. A high Cronbach's a (0.88) was also documented.\n\n\nCONCLUSIONS\nThe results reported that the Multicultural Quality of Life Index is reliable, has a high internal consistency, is capable of discriminating groups of presumed different quality of life levels, is quite efficient, and easy to use.",
"title": ""
},
{
"docid": "bb6737c84b0d96896c82abefee876858",
"text": "This paper introduces a novel tactile sensor with the ability to detect objects in the sensor's near proximity. For both tasks, the same capacitive sensing principle is used. The tactile part of the sensor provides a tactile sensor array enabling the sensor to gather pressure profiles of the mechanical contact area. Several tactile sensors have been developed in the past. These sensors lack the capability of detecting objects in their near proximity before a mechanical contact occurs. Therefore, we developed a tactile proximity sensor, which is able to measure the current flowing out of or even into the sensor. Measuring these currents and the exciting voltage makes a calculation of the capacitance coupled to the sensor's surface and, using more sensors of this type, the change of capacitance between the sensors possible. The sensor's mechanical design, the analog/digital signal processing and the hardware efficient demodulator structure, implemented on a FPGA, will be discussed in detail.",
"title": ""
},
{
"docid": "cba3209a27e1332f25f29e8b2c323d37",
"text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.",
"title": ""
}
] |
scidocsrr
|
845e3be9bf241a27669c011e6064d3ac
|
Noise power spectral density estimation based on optimal smoothing and minimum statistics
|
[
{
"docid": "65d60131b1ceba50399ceffa52de7e8a",
"text": "Cox, Matthew L. Miller, and Jeffrey A. Bloom. San Diego, CA: Academic Press, 2002, 576 pp. $69.96 (hardbound). A key ingredient to copyright protection, digital watermarking provides a solution to the illegal copying of material. It also has broader uses in recording and electronic transaction tracking. This book explains “the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.” [book notes] The authors are extensively experienced in digital watermarking technologies. Cox recently joined the NEC Research Institute after a five-year stint at AT&T Bell Labs. Miller’s interest began at AT&T Bell Labs in 1979. He also is employed at NEC. Bloom is a researcher in digital watermarking at the Sarnoff Corporation. His acquaintance with the field began at Signafy, Inc. and continued through his employment at NEC Research Institute. The book features the following: Review of the underlying principles of watermarking relevant for image, video, and audio; Discussion of a wide variety of applications, theoretical principles, detection and embedding concepts, and key properties; Examination of copyright protection and other applications; Presentation of a series of detailed examples that illustrate watermarking concepts and practices; Appendix, in print and on the Web, containing the source code for the examples; Comprehensive glossary of terms. “The authors provide a comprehensive overview of digital watermarking, rife with detailed examples and grounded within strong theoretical framework. Digital Watermarking will serve as a valuable introduction as well as a useful reference for those engaged in the field.”—Walter Bender, Director, M.I.T. Media Lab",
"title": ""
}
] |
[
{
"docid": "bd5d84c9d699080b2d668809626e90fe",
"text": "Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rulebased framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as “Good” or “Acceptable” in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.",
"title": ""
},
{
"docid": "19699035d427e648fa495628dac79c71",
"text": "We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model th finite horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is highdimensional, continuous, non-differentiable, nonlinear , nonGaussian and must be solved in real-time. Most existing techniques for stochastic planning and reinforcement lear ning are therefore inapplicable. To solve this extremely com plex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncer tainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually-guide mobile robot. The solution proposed here is also applicable to other closelyrelated domains, including active vision, sequential expe rimental design, dynamic sensing and calibration with mobile sensors.",
"title": ""
},
{
"docid": "b7189c1b1dc625fb60a526d81c0d0a89",
"text": "This paper presents a development of an anthropomorphic robot hand, `KITECH Hand' that has 4 full-actuated fingers. Most robot hands have small size simultaneously many joints as compared with robot manipulators. Components of actuator, gear, and sensors used for building robots are not small and are expensive, and those make it difficult to build a small sized robot hand. Differently from conventional development of robot hands, KITECH hand adopts a RC servo module that is cheap, easily obtainable, and easy to handle. The RC servo module that have been already used for several small sized humanoid can be new solution of building small sized robot hand with many joints. The feasibility of KITECH hand in object manipulation is shown through various experimental results. It is verified that the modified RC servo module is one of effective solutions in the development of a robot hand.",
"title": ""
},
{
"docid": "5b617701a4f2fa324ca7e3e7922ce1c4",
"text": "Open circuit voltage of a silicon solar cell is around 0.6V. A solar module is constructed by connecting a number of cells in series to get a practically usable voltage. Partial shading of a Solar Photovoltaic Module (SPM) is one of the main causes of overheating of shaded cells and reduced energy yield of the module. The present work is a study of harmful effects of partial shading on the performance of a PV module. A PSPICE simulation model that represents 36 cells PV module under partial shaded conditions has been used to test several shading profiles and results are presented.",
"title": ""
},
{
"docid": "ee2f1d856532b262455224ebaddf73d1",
"text": "In this paper a behavioral control framework is developed to control anunmanned aerial vehicle-manipulator (UAVM) system, composed by a multirotor aerial vehicle equipped with a robotic arm. The goal is to ensure vehiclearm coordination and manage complex multi-task missions, where different behaviors must be encompassed in a clear and meaningful way. In detail, a control scheme, based on the null space-based behavioral paradigm, is proposed to hanB F. Pierri francesco.pierri@unibas.it K. Baizid khelifa.baizid@mines-douai.fr G. Giglio gerardo.giglio@unibas.it M. A. Trujillo matrujillo@catec.aero G. Antonelli antonelli@unicas.it F. Caccavale fabrizio.caccavale@unibas.it A. Viguria aviguria@catec.aero S. Chiaverini chiaverini@unicas.it A. Ollero aollero@us.es 1 Mines Douai, IA 59508 Douai, France 2 Univ. Lille, 59000 Lille, France 3 University of Basilicata, Potenza, Italy 4 Center for Advanced Aerospace Technologies (CATEC), Seville, Spain 5 University of Cassino and Southern Lazio, Cassino, Italy 6 University of Seville, Seville, Spain dle the coordination between the arm and vehicle motion. To this aim, a set of basic functionalities (elementary behaviors) are designed and combined in a given priority order, in order to attain more complex tasks (compound behaviors). A supervisor is in charge of switching between the compound behaviors according to the mission needs and the sensory feedback. The method is validated on a real testbed, consisting of a multirotor aircraft with an attached 6 Degree of Freedoms manipulator, developed within the EU-funded project ARCAS (Aerial Robotics Cooperative Assembly System). At the the best of authors’ knowledge, this is the first time that an UAVM system is experimentally tested in the execution of complex multi-task missions. The results show that, by properly designing a set of compound behaviors and a supervisor, vehicle-arm coordination in complex missions can be effectively managed.",
"title": ""
},
{
"docid": "21dd7b4582f71d678b5592a547d9e730",
"text": "The existence of a worldwide indoor floorplans database can lead to significant growth in location-based applications, especially for indoor environments. In this paper, we present CrowdInside: a crowdsourcing-based system for the automatic construction of buildings floorplans. CrowdInside leverages the smart phones sensors that are ubiquitously available with humans who use a building to automatically and transparently construct accurate motion traces. These accurate traces are generated based on a novel technique for reducing the errors in the inertial motion traces by using the points of interest in the indoor environment, such as elevators and stairs, for error resetting. The collected traces are then processed to detect the overall floorplan shape as well as higher level semantics such as detecting rooms and corridors shapes along with a variety of points of interest in the environment.\n Implementation of the system in two testbeds, using different Android phones, shows that CrowdInside can detect the points of interest accurately with 0.2% false positive rate and 1.3% false negative rate. In addition, the proposed error resetting technique leads to more than 12 times enhancement in the median distance error compared to the state-of-the-art. Moreover, the detailed floorplan can be accurately estimated with a relatively small number of traces. This number is amortized over the number of users of the building. We also discuss possible extensions to CrowdInside for inferring even higher level semantics about the discovered floorplans.",
"title": ""
},
{
"docid": "5259c661992baa926173348c4e0b0cd2",
"text": "A controller assistant system is developed based on the closed-form solution of an offline optimization problem for a four-wheel-drive front-wheel-steerable vehicle. The objective of the controller is to adjust the actual vehicle attitude and motion according to the driver's manipulating commands. The controller takes feedback from acceleration signals, and the imposed conditions and limitations on the controller are studied through the concept of state-derivative feedback control systems. The controller gains are optimized using linear matrix inequality (LMI) and genetic algorithm (GA) techniques. Reference signals are calculated using a driver command interpreter module (DCIM) to accurately interpret the driver's intentions for vehicle motion and to allow the controller to generate proper control actions. It is shown that the controller effectively enhances the handling performance and stability of the vehicle under different road conditions and driving scenarios. Although controller performance is studied for a four-wheel-drive front-wheel-steerable vehicle, the algorithm can also be applied to other vehicle configurations with slight changes.",
"title": ""
},
{
"docid": "a818a70bd263617eb3089cde9e9d1bb9",
"text": "The paper proposes identifying relevant information sources from the history of combined searching and browsing behavior of many Web users. While it has been previously shown that user interactions with search engines can be employed to improve document ranking, browsing behavior that occurs beyond search result pages has been largely overlooked in prior work. The paper demonstrates that users' post-search browsing activity strongly reflects implicit endorsement of visited pages, which allows estimating topical relevance of Web resources by mining large-scale datasets of search trails. We present heuristic and probabilistic algorithms that rely on such datasets for suggesting authoritative websites for search queries. Experimental evaluation shows that exploiting complete post-search browsing trails outperforms alternatives in isolation (e.g., clickthrough logs), and yields accuracy improvements when employed as a feature in learning to rank for Web search.",
"title": ""
},
{
"docid": "a67f7593ea049be1e2785108b6181f7d",
"text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.",
"title": ""
},
{
"docid": "1f121c30e686d25f44363f44dc71b495",
"text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑",
"title": ""
},
{
"docid": "933a43bb4564a683415da49009626ce7",
"text": "In recent years, deep learning methods applying unsupervised learning to train deep layers of neural networks have achieved remarkable results in numerous fields. In the past, many genetic algorithms based methods have been successfully applied to training neural networks. In this paper, we extend previous work and propose a GA-assisted method for deep learning. Our experimental results indicate that this GA-assisted approach improves the performance of a deep autoencoder, producing a sparser neural network.",
"title": ""
},
{
"docid": "7853936d58687b143bc135e6e60092ce",
"text": "Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented.",
"title": ""
},
{
"docid": "59433ea14c58dafae7746df2dcfc6197",
"text": "Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe.",
"title": ""
},
{
"docid": "8fb598f1f55f7a20bfc05865fc0a5efa",
"text": "The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score is higher than a state-based threshold. For evaluations with 1555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve of 0.8710 than 5 other baseline detectors from the literature. We also show the variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature engineering effort.",
"title": ""
},
{
"docid": "3c60c99cf32bb97129f3d91c7ada383c",
"text": "An adaptive neuro-fuzzy inference system is developed and tested for traffic signal controlling. From a given input data set, the developed adaptive neuro-fuzzy inference system can draw the membership functions and corresponding rules by its own, thus making the designing process easier and reliable compared to standard fuzzy logic controllers. Among useful inputs of fuzzy signal control systems, gap between two vehicles, delay at intersections, vehicle density, flow rate and queue length are often used. By considering the practical applicability, the average vehicle inflow rate of each lane is considered in this work as inputs to model the adaptive neuro-fuzzy signal control system. In order to define the desired objectives of reducing the waiting time of vehicles at the signal control, the combined delay of vehicles within one signal cycle is minimized using a simple mathematical optimization method The performance of the control system was tested further by developing an event driven traffic simulation program in Matlab under Windows environment. As expected, the neuro-fuzzy logic controller performed better than the fixed time controller due to its real time adaptability. The neuro-fuzzy controlling system allows more vehicles to pass the junction in congestion and less number of vehicles when the flow rate is low. In particular, the performance of the developed system was superior when there were abrupt changes in traffic flow rates.",
"title": ""
},
{
"docid": "4c2a936cf236009993e32faee549c268",
"text": "In this paper, we proposed Discrete Radon Transform (DRT) technique for feature extraction of static signature recognition to identify forgeries. Median filter has been introduced for noise cancellation of handwritten signature. This paper describes static signature verification techniques where signature samples of each person was collected and cropped by automatic cropping system. Projection based global features are extracted like Horizontal, Vertical and combination of both the projections, these all are one dimensional feature vectors to recognize the handwritten static signature. The distance between two corresponding vectors can be measured with Dynamic Time Warping algorithm (DTW) and using only six genuine signatures samples of each person has been employed here in order to train our system. In the proposed system process time required for training our system for each person is between 1.5 to 4.2 seconds and requires less memory for storage. The optimal performance of the system was found using proposed technique for Combined projection features and it gives FAR of 5.60%, FRR of 8.49% and EER 7.60%, which illustrates such new approach to be quite effective and reliable.",
"title": ""
},
{
"docid": "6b2211308ad03c0eaa3dccec5bb81b75",
"text": "Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device(s). We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human written reports.",
"title": ""
},
{
"docid": "759207b77a14edb08b81cbd53def9960",
"text": "Computer Aided Design (CAD) typically involves tasks such as adjusting the camera perspective and assembling pieces in free space that require specifying 6 degrees of freedom (DOF). The standard approach is to factor these DOFs into 2D subspaces that are mapped to the x and y axes of a mouse. This metaphor is inherently modal because one needs to switch between subspaces, and disconnects the input space from the modeling space. In this paper, we propose a bimanual hand tracking system that provides physically-motivated 6-DOF control for 3D assembly. First, we discuss a set of principles that guide the design of our precise, easy-to-use, and comfortable-to-use system. Based on these guidelines, we describe a 3D input metaphor that supports constraint specification classically used in CAD software, is based on only a few simple gestures, lets users rest their elbows on their desk, and works alongside the keyboard and mouse. Our approach uses two consumer-grade webcams to observe the user's hands. We solve the pose estimation problem with efficient queries of a precomputed database that relates hand silhouettes to their 3D configuration. We demonstrate efficient 3D mechanical assembly of several CAD models using our hand-tracking system.",
"title": ""
},
{
"docid": "ffde9de213d2b4f7387e8b92bdb517e6",
"text": "Renewable energy sources based on photovoltaic (PV) along with battery-based energy storage necessitate power conditioning to meet load requirements and/or be connected to the electrical grid. The power conditioning is achieved via a dc-dc converter and a DC-AC inverter stages to produce the desired AC source. This is also the case even when the load is of dc type, such as the typical portable electronic devices that require AC adaptors to be powered from the AC mains. The letter presents a hybrid PV-battery-powered dc bus system that eliminates the DC-AC conversion stage, resulting in lower cost and improved overall energy conversion efficiency. It is also shown experimentally that the switching ac adaptors associated with the various commonly used portable electronic devices can be reused with the proposed dc bus system. A novel high-gain hybrid boost-flyback converter is also introduced with several times higher voltage conversion ratio than the conventional boost converter topology. This arrangement results in higher DC bus levels and lower cable conduction losses. Moreover, the voltage stress on the hybrid boost-flyback converter power switch is within half the output voltage. Experimental results taken from a laboratory prototype are presented to confirm the effectiveness of the proposed converter/system.",
"title": ""
},
{
"docid": "c5f749c36b3d8af93c96bee59f78efe5",
"text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.",
"title": ""
}
] |
scidocsrr
|
b1b9cafe1826c60a5bcebf8451bb9a1e
|
DNN-based residual echo suppression
|
[
{
"docid": "46e37ce77756f58ab35c0930d45e367f",
"text": "In this letter, we propose an enhanced stereophonic acoustic echo suppression (SAES) algorithm incorporating spectral and temporal correlations in the short-time Fourier transform (STFT) domain. Unlike traditional stereophonic acoustic echo cancellation, SAES estimates the echo spectra in the STFT domain and uses a Wiener filter to suppress echo without performing any explicit double-talk detection. The proposed approach takes account of interdependencies among components in adjacent time frames and frequency bins, which enables more accurate estimation of the echo signals. Experimental results show that the proposed method yields improved performance compared to that of conventional SAES.",
"title": ""
},
{
"docid": "28f61d005f1b53ad532992e30b9b9b71",
"text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.",
"title": ""
}
] |
[
{
"docid": "2bb39c3428116cef1f60cd1c5d36613e",
"text": "Digital video signal is widely used in modern society. There is increasing demand for it to be more secure and highly reliable. Focusing on this, we propose a method of detecting mosaic blocks. Our proposed method combines two algorithms: HOG with SVM classifier and template matching. We also consider characteristics of mosaic blocks other than shape. Experimental results show that our proposed method has high detection performance of mosaic blocks.",
"title": ""
},
{
"docid": "9ee40d6585b21544e2f112337b0f6b65",
"text": "This article presents a convolutional neural network for the automatic segmentation of brain tumors in multimodal 3D MR images based on a U-net architecture. We evaluate the use of a densely connected convolutional network encoder (DenseNet) which was pretrained on the ImageNet data set. We detail two network architectures that can take into account multiple 3D images as inputs. This work aims to identify if a generic pretrained network can be used for very specific medical applications where the target data differ both in the number of spatial dimensions as well as in the number of inputs channels. Moreover in order to regularize this transfer learning task we only train the decoder part of the U-net architecture. We evaluate the effectiveness of the proposed approach on the BRATS 2018 segmentation challenge [1,2,3,4,5] where we obtained dice scores of 0.79, 0.90, 0.85 and 95% Hausdorff distance of 2.9mm, 3.95mm, and 6.48mm for enhanced tumor core, whole tumor and tumor core respectively on the validation set. This scores degrades to 0.77, 0.88, 0.78 and 95% Hausdorff distance of 3.6mm, 5.72mm, and 5.83mm on the testing set [1].",
"title": ""
},
{
"docid": "84be70157c6a6707d8c5621c9b7aed82",
"text": "Depression is associated with significant disability, mortality and healthcare costs. It is the third leading cause of disability in high-income countries, 1 and affects approximately 840 million people worldwide. 2 Although biological, psychological and environmental theories have been advanced, 3 the underlying pathophysiology of depression remains unknown and it is probable that several different mechanisms are involved. Vitamin D is a unique neurosteroid hormone that may have an important role in the development of depression. Receptors for vitamin D are present on neurons and glia in many areas of the brain including the cingulate cortex and hippocampus, which have been implicated in the pathophysiology of depression. 4 Vitamin D is involved in numerous brain processes including neuroimmuno-modulation, regulation of neurotrophic factors, neuroprotection, neuroplasticity and brain development, 5 making it biologically plausible that this vitamin might be associated with depression and that its supplementation might play an important part in the treatment of depression. Over two-thirds of the populations of the USA and Canada have suboptimal levels of vitamin D. 6,7 Some studies have demonstrated a strong relationship between vitamin D and depression, 8,9 whereas others have shown no relationship. 10,11 To date there have been eight narrative reviews on this topic, 12–19 with the majority of reviews reporting that there is insufficient evidence for an association between vitamin D and depression. None of these reviews used a comprehensive search strategy, provided inclusion or exclusion criteria, assessed risk of bias or combined study findings. In addition, several recent studies were not included in these reviews. 9,10,20,21 Therefore, we undertook a systematic review and meta-analysis to investigate whether vitamin D deficiency is associated with depression in adults in case–control and cross-sectional studies; whether vitamin D deficiency increases the risk of developing depression in cohort studies in adults; and whether vitamin D supplementation improves depressive symptoms in adults with depression compared with placebo, or prevents depression compared with placebo, in healthy adults in randomised controlled trials (RCTs). We searched the databases MEDLINE, EMBASE, PsycINFO, CINAHL, AMED and Cochrane CENTRAL (up to 2 February 2011) using separate comprehensive strategies developed in consultation with an experienced research librarian (see online supplement DS1). A separate search of PubMed identified articles published electronically prior to print publication within 6 months of our search and therefore not available through MEDLINE. The clinical trials registries clinicaltrials.gov and Current Controlled Trials (controlled-trials.com) were searched for unpublished data. The reference lists …",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
},
{
"docid": "3f7ccea016ab9e4742c52603209bbd45",
"text": "Quick Response (QR) code is a two-dimensional bar-code, which is quite popular due to its excellent storage capacity and error resilience. Generally, QR codes are widely used to store text information such as URL links, geographical coordinates, name cards, inventory information, authorship, etc. To reach the current limit of QR codes, in this paper, we would like to propose an innovative multimedia archiving technique, which is built upon the advanced signal processing scheme to tackle the multiple QR codes all at once. The multimedia data often include texts, images, audio data, special-purpose codes, etc. The recently proposed software-defined multiplexing code (SDMC) can be applied to combine all of them even though each type of data would have the individual data format different from others. Our proposed new archiving technique involves two phases, namely multimedia-amalgamation (MA) and multimedia-detachment (MD). In the MA phase, the multimedia data, regardless of their data format, can be converted to the binary streams; then the SDMC will be employed to aggregate them together in a longer binary stream; such an ultimate long binary stream will be converted to the QR codes. The QR codes, protected by the inherent error-correction mechanism, will thus be placed on a sheet or multiple sheets with a uniform spacing. The sheet(s) containing multiple QR codes can thus be archived in soft (PDF) or hard (print-out) copies. In the MD (recovery) phase, one can scan these PDF files and employ our designed signal processing algorithms to separate the individual QR codes; then the corresponding QR decoder can convert these QR codes back to the original binary stream. Finally the unstuffing algorithm in the SDMC can detach the individual data from the long binary stream composed by the multimedia mixture.",
"title": ""
},
{
"docid": "cbc9e0641caea9af6d75a94de26e09df",
"text": "At present, spatio-temporal action detection in the video is still a challenging problem, considering the complexity of the background, the variety of the action or the change of the viewpoint in the unconstrained environment. Most of current approaches solve the problem via a two-step processing: first detecting actions at each frame; then linking them, which neglects the continuity of the action and operates in an offline and batch processing manner. In this paper, we attempt to build an online action detection model that introduces the spatio-temporal coherence existed among action regions when performing action category inference and position localization. Specifically, we seek to represent the spatio-temporal context pattern via establishing an encoder-decoder model based on the convolutional recurrent network. The model accepts a video snippet as input and encodes the dynamic information of the action in the forward pass. During the backward pass, it resolves such information at each time instant for action detection via fusing the current static or motion cue. Additionally, we propose an incremental action tube generation algorithm, which accomplishes action bounding-boxes association, action label determination and the temporal trimming in a single pass. Our model takes in the appearance, motion or fused signals as input and is tested on two prevailing datasets, UCF-Sports and UCF-101. The experiment results demonstrate the effectiveness of our method which achieves a performance superior or comparable to compared existing approaches.",
"title": ""
},
{
"docid": "b181d6fd999fdcd8c5e5b52518998175",
"text": "Hydrogels are used to create 3D microenvironments with properties that direct cell function. The current study demonstrates the versatility of hyaluronic acid (HA)-based hydrogels with independent control over hydrogel properties such as mechanics, architecture, and the spatial distribution of biological factors. Hydrogels were prepared by reacting furan-modified HA with bis-maleimide-poly(ethylene glycol) in a Diels-Alder click reaction. Biomolecules were photopatterned into the hydrogel by two-photon laser processing, resulting in spatially defined growth factor gradients. The Young's modulus was controlled by either changing the hydrogel concentration or the furan substitution on the HA backbone, thereby decoupling the hydrogel concentration from mechanical properties. Porosity was controlled by cryogelation, and the pore size distribution, by the thaw temperature. The addition of galactose further influenced the porosity, pore size, and Young's modulus of the cryogels. These HA-based hydrogels offer a tunable platform with a diversity of properties for directing cell function, with applications in tissue engineering and regenerative medicine.",
"title": ""
},
{
"docid": "b2d1a0befef19d466cd29868d5cf963b",
"text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.",
"title": ""
},
{
"docid": "e737bb31bb7dbb6dbfdfe0fd01bfe33c",
"text": "Cannabidiol (CBD) is a non-psychotomimetic phytocannabinoid derived from Cannabis sativa. It has possible therapeutic effects over a broad range of neuropsychiatric disorders. CBD attenuates brain damage associated with neurodegenerative and/or ischemic conditions. It also has positive effects on attenuating psychotic-, anxiety- and depressive-like behaviors. Moreover, CBD affects synaptic plasticity and facilitates neurogenesis. The mechanisms of these effects are still not entirely clear but seem to involve multiple pharmacological targets. In the present review, we summarized the main biochemical and molecular mechanisms that have been associated with the therapeutic effects of CBD, focusing on their relevance to brain function, neuroprotection and neuropsychiatric disorders.",
"title": ""
},
{
"docid": "d93609853422aed1c326d35ab820095d",
"text": "We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low resolution light fields with different levels of texture and parallax complexity. We provide two experimental results: a human subject and two planar elements at different depths.",
"title": ""
},
{
"docid": "7b27d8b8f05833888b9edacf9ace0a18",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "eeb1c6e76e3957e5444dcc3865595642",
"text": "The advances of Radio-Frequency Identification (RFID) technology have significantly enhanced the capability of capturing data from pervasive space. It becomes a great challenge in the information era to effectively understand human behavior, mobility and activity through the perceived RFID data. Focusing on RFID data management, this article provides an overview of current challenges, emerging opportunities and recent progresses in RFID. In particular, this article has described and analyzed the research work on three aspects: algorithm, protocol and performance evaluation. We investigate the research progress in RFID with anti-collision algorithms, authentication and privacy protection protocols, localization and activity sensing, as well as performance tuning in realistic settings. We emphasize the basic principles of RFID data management to understand the state-of-the-art and to address directions of future research in RFID.",
"title": ""
},
{
"docid": "1a5b28583eaf7cab8cc724966d700674",
"text": "Advertising (ad) revenue plays a vital role in supporting free websites. When the revenue dips or increases sharply, ad system operators must find and fix the rootcause if actionable, for example, by optimizing infrastructure performance. Such revenue debugging is analogous to diagnosis and root-cause analysis in the systems literature but is more general. Failure of infrastructure elements is only one potential cause; a host of other dimensions (e.g., advertiser, device type) can be sources of potential causes. Further, the problem is complicated by derived measures such as costs-per-click that are also tracked along with revenue. Our paper takes the first systematic look at revenue debugging. Using the concepts of explanatory power, succinctness, and surprise, we propose a new multidimensional root-cause algorithm for fundamental and derived measures of ad systems to identify the dimension mostly likely to blame. Further, we implement the attribution algorithm and a visualization interface in a tool called the Adtributor to help troubleshooters quickly identify potential causes. Based on several case studies on a very large ad system and extensive evaluation, we show that the Adtributor has an accuracy of over 95% and helps cut down troubleshooting time by an order of magnitude.",
"title": ""
},
{
"docid": "b50fb31e9c9bbf5f77b54bb048c0a025",
"text": "Companies use Facebook fan pages to promote their products or services. Recent research shows that user(UGC) and marketer-generated content (MGC) created on fan pages affect online sales. But it is still unclear how exactly they affect consumers during their purchase process. We analyze field data from a large German e-tailer to investigate the effects of UGC and MGC in a multi-stage model of purchase decision processes: awareness creation, interest stimulation, and final purchase decision. We find that MGC and UGC create awareness by attracting users to the fan page. Increased numbers of active users stimulate user interest, and more users visit the e-tailer’s online shop. Neutral UGC increase the conversion rate of online shop visitors. Comparisons between one-, twoand three-stage modes show that neglecting one or two stages hides several important effects of MGC and UGC on consumers and ultimately leads to inaccurate predictions of key business figures.",
"title": ""
},
{
"docid": "beb9fe0cb07e8531f01744bd8800d67b",
"text": "Networked embedded systems have become quite important nowadays, especially for monitoring and controlling the devices. Advances in embedded system technologies have led to the development of residential gateway and automation systems. Apart from frequent power cuts, residential areas suffer from a serious problem that people are not aware about the power cuts due to power disconnections in the transformers, also power theft issues and power wastage in the street lamps during day time exists. So this paper presents a lifestyle system using GSM which transmits the status of the transformer.",
"title": ""
},
{
"docid": "84bfeb74de62b3ff84aac972d6a92c27",
"text": "In this paper, extremely miniaturized film bulk acoustic resonator (FBAR) duplexer was newly designed and fabricated for advanced mobile/wireless applications by using PCB embedded vertically stacked high Q spiral inductors. Proposed FBAR duplexer was comprised of FBAR Tx/Rx filters and PCB embedded shunt high Q inductors. The fabricated FBAR duplexer has insertion loss of −3.0 dBMax in pass-band and absolute attenuation of −41 dBmin at transmitter to antenna ports, while it has insertion loss of −3.4 dBMax in pass-band and absolute attenuation of −51 dBmin at antenna to receiver ports. The simulation and measurement of the fabricated FBAR duplexer were well-matched, respectively. The fabricated duplexer has a size of 3.0 mm × 2.5 mm which is the smallest in the reported ones so far. Since it has excellent performance characteristics and small size/volume, it can be directly applied for US-PCS handsets.",
"title": ""
},
{
"docid": "d131cda62d8ac73b209d092d8e36037e",
"text": "The problem of packing congruent spheres (i.e., copies of the same sph ere) in a bounded domain arises in many applications. In this paper, we present a new pack-and-shake scheme for packing congruent spheres in various bounded 2-D domains. Our packing scheme is based on a number of interesting ideas, such as a trimming and packing approach, optimal lattice packing under translation and/or rotation, shaking procedures, etc. Our packing algorithms have fairly low time complexities. In certain cases, they even run in nearly linear time. Our techniques can be easily generalized to congruent packing of other shapes of objects, and are readily extended to higher dimensional spaces. Applications of our packing algorithms to treatment planning of radiosurgery are discussed. Experimental results suggest that our algorithms produce reasonably dense packings.",
"title": ""
},
{
"docid": "778d760ce03e559763112d365a3d8444",
"text": "The growing market for smart home IoT devices promises new conveniences for consumers while presenting new challenges for preserving privacy within the home. Many smart home devices have always-on sensors that capture users’ offline activities in their living spaces and transmit information about these activities on the Internet. In this paper, we demonstrate that an ISP or other network observer can infer privacy sensitive in-home activities by analyzing Internet traffic from smart homes containing commercially-available IoT devices even when the devices use encryption. We evaluate several strategies for mitigating the privacy risks associated with smart home device traffic, including blocking, tunneling, and rate-shaping. Our experiments show that traffic shaping can effectively and practically mitigate many privacy risks associated with smart home IoT devices. We find that 40KB/s extra bandwidth usage is enough to protect user activities from a passive network adversary. This bandwidth cost is well within the Internet speed limits and data caps for many smart homes.",
"title": ""
}
] |
scidocsrr
|
c451fd0cb6ff3f6f33922eb71fe4b875
|
The Post Adoption Switching Of Social Network Service: A Human Migratory Model
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "9948738a487ed899ec50ac292e1f9c6d",
"text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.",
"title": ""
},
{
"docid": "013bf71ab18747afefa07cbe6ae6d477",
"text": "Mobile commerce is becoming increasingly important in business. This trend is particularly evident in the service industry. To cope with this demand, various platforms have been proposed to provide effective mobile commerce solutions. Among these solutions, wireless application protocol (WAP) is one of the most widespread technical standards for mobile commerce. Following continuous technical evolution, WAP has come to include various new features. However, WAP services continue to struggle for market share. Hence, understanding WAP service adoption is increasingly important for enterprises interested in developing mobile commerce. This study aims to (1) identify the critical factors of WAP service adoption; (2) explore the relative importance of each factor for users who adopt WAP and those who do not; (3) examine the causal relationships among variables on WAP service adoption behavior. This study conducts an empirical test of WAP service adoption in Taiwan, based on theory of planned behavior (TPB) and innovation diffusion theory (IDT). The results help clarify the critical factors influences on WAP service adoption in the Greater China economic region. The Greater China economic region is a rapidly growing market. Many western telecommunication enterprises are strongly interested in providing wireless services in Shanghai, Singapore, Hong Kong and Taipei. Since these cities share a similar culture and the same language, the analytical results and conclusions of this study may be a good reference for global telecommunication enterprises to establish the developing strategy for their eastern branches. From the analysis conducted in this study, the critical factors for influences on WAP service adoption include connection speed, service cost, user satisfaction, personal innovativeness, ease of use, peer influence, and facilitating condition. Therefore, this study proposes that strategies for marketing WAP services in the Greater China economic region can pay increased attention to these factors. Notably, this study also provides some suggestion for subsequent researchers and practitioners seeking to understand WAP service adoption behavior.",
"title": ""
}
] |
[
{
"docid": "11f84f99de269ca5ca43fc6d761504b7",
"text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons",
"title": ""
},
{
"docid": "bf7bc12a4f5cbac481c8a0a4e92854b9",
"text": "Recurrent neural networks (RNN), especially the ones requiring extremely long term memories, are difficult to training. Hence, they provide an ideal testbed for benchmarking the performance of optimization algorithms. This paper reports test results of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on RNN training. We find that PSGD may outperform Hessian-free optimization which achieves the state-of-the-art performance on the target problems, although it is only slightly more complicated than stochastic gradient descent (SGD) and is user friendly, virtually a tuning free algorithm.",
"title": ""
},
{
"docid": "4762cbac8a7e941f26bce8217cf29060",
"text": "The 2-D maximum entropy method not only considers the distribution of the gray information, but also takes advantage of the spatial neighbor information with using the 2-D histogram of the image. As a global threshold method, it often gets ideal segmentation results even when the image s signal noise ratio (SNR) is low. However, its time-consuming computation is often an obstacle in real time application systems. In this paper, the image thresholding approach based on the index of entropy maximization of the 2-D grayscale histogram is proposed to deal with infrared image. The threshold vector (t, s), where t is a threshold for pixel intensity and s is another threshold for the local average intensity of pixels, is obtained through a new optimization algorithm, namely, the particle swarm optimization (PSO) algorithm. PSO algorithm is realized successfully in the process of solving the 2-D maximum entropy problem. The experiments of segmenting the infrared images are illustrated to show that the proposed method can get ideal segmentation result with less computation cost. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "721ff703dfafad6b1b330226c36ed641",
"text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.",
"title": ""
},
{
"docid": "9d5de7a0330d8bba49eb8d73597473b9",
"text": "Web crawlers are highly automated and seldom regulated manually. The diversity of crawler activities often leads to ethical problems such as spam and service attacks. In this research, quantitative models are proposed to measure the web crawler ethics based on their behaviors on web servers. We investigate and define rules to measure crawler ethics, referring to the extent to which web crawlers respect the regulations set forth in robots.txt configuration files. We propose a vector space model to represent crawler behavior and measure the ethics of web crawlers based on the behavior vectors. The results show that ethicality scores vary significantly among crawlers. Most commercial web crawlers' behaviors are ethical. However, many commercial crawlers still consistently violate or misinterpret certain robots.txt rules. We also measure the ethics of big search engine crawlers in terms of return on investment. The results show that Google has a higher score than other search engines for a US website but has a lower score than Baidu for Chinese websites.",
"title": ""
},
{
"docid": "766c723d00ac15bf31332c8ab4b89b63",
"text": "For those people without artistic talent, they can only draw rough or even awful doodles to express their ideas. We propose a doodle beautification system named Doodle Master, which can transfer a rough doodle to a plausible image and also keep the semantic concepts of the drawings. The Doodle Master applies the VAE/GAN model to decode and generate the beautified result from a constrained latent space. To achieve better performance for sketch data which is more like discrete distribution, a shared-weight method is proposed to improve the learnt features of the discriminator with the aid of the encoder. Furthermore, we design an interface for the user to draw with basic drawing tools and adjust the number of reconstruction times. The experiments show that the proposed Doodle Master system can successfully beautify the rough doodle or sketch in real-time.",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "b5b4e637065ba7c0c18a821bef375aea",
"text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.",
"title": ""
},
{
"docid": "bf44cc7e8e664f930edabf20ca06dd29",
"text": "Nowadays, our living environment is rich in radio-frequency energy suitable for harvesting. This energy can be used for supplying low-power consumption devices. In this paper, we analyze a new type of a Koch-like antenna which was designed for energy harvesting specifically. The designed antenna covers two different frequency bands (GSM 900 and Wi-Fi). Functionality of the antenna is verified by simulations and measurements.",
"title": ""
},
{
"docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2",
"text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).",
"title": ""
},
{
"docid": "6097315ac2e4475e8afd8919d390babf",
"text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.",
"title": ""
},
{
"docid": "9d089af812c0fdd245a218362d88b62a",
"text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.",
"title": ""
},
{
"docid": "61615f5aefb0aa6de2dd1ab207a966d5",
"text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.",
"title": ""
},
{
"docid": "4d297680cd342f46a5a706c4969273b8",
"text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.",
"title": ""
},
{
"docid": "88a21d973ec80ee676695c95f6b20545",
"text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"title": ""
},
{
"docid": "30520912723d67f7d07881aa33cdf229",
"text": "OBJECTIVE\nA study to examine the incidence and characteristics of concussions among Canadian university athletes during 1 full year of football and soccer participation.\n\n\nDESIGN\nRetrospective survey.\n\n\nPARTICIPANTS\nThree hundred eighty Canadian university football and 240 Canadian university soccer players reporting to 1999 fall training camp. Of these, 328 football and 201 soccer players returned a completed questionnaire.\n\n\nMAIN OUTCOME MEASURES\nBased on self-reported symptoms, calculations were made to determine the number of concussions experienced during the previous full year of football or soccer participation, the duration of symptoms, the time for return to play, and any associated risk factors for concussions.\n\n\nRESULTS\nOf all the athletes who returned completed questionnaires, 70.4% of the football players and 62.7% of the soccer players had experienced symptoms of a concussion during the previous year. Only 23.4% of the concussed football players and 19.8% of the concussed soccer players realized they had suffered a concussion. More than one concussion was experienced by 84.6% of the concussed football players and 81.7% of the concussed soccer players. Examining symptom duration, 27.6% of all concussed football players and 18.8% of all concussed soccer players experienced symptoms for at least 1 day or longer. Tight end and defensive lineman were the positions most commonly affected in football, while goalies were the players most commonly affected in soccer. Variables that increased the odds of suffering a concussion during the previous year for football players included a history of a traumatic loss of consciousness or a recognized concussion in the past. Variables that increased the odds of suffering a concussion during the previous year for soccer players included a past history of a recognized concussion while playing soccer and being female.\n\n\nCONCLUSIONS\nUniversity football and soccer players seem to be experiencing a significant amount of concussions while participating in their respective sports. Variables that seem to increase the odds of suffering a concussion during the previous year for football and soccer players include a history of a recognized concussion. Despite being relatively common, symptoms of concussion may not be recognized by many players.",
"title": ""
},
{
"docid": "28d8cad6fda1f1345b9905e71495e745",
"text": "To provide COSMOS, a dynamic model baaed manipulator control system, with an improved dynamic model, a PUMA 560 arm waa diaaaaembled; the inertial propertiea of the individual links were meaaured; and an ezplicit model incorporating all ofthe non-zero meaaured parametera waa deriued. The ezplicit model of the PUMA arm has been obtained with a derivation procedure comprised of aeveral heuristic rulea for simplification. A aimplijied model, abbreviated from the full ezplicit model with a 1% aignijicance criterion, can be evaluated with 305 calculationa, one fifth the number required by the recuraive Newton-Euler method. The procedure used to derive the model i a laid out; the meaaured inertial parametera are preaented, and the model ia included in an appendiz.",
"title": ""
},
{
"docid": "ff6a487e49d1fed033ad082ad7cd0524",
"text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.",
"title": ""
},
{
"docid": "5063adc5020cacddb5a4c6fd192fc17e",
"text": "In this paper, A Novel 1 to 4 modified Wilkinson power divider operating over the frequency range of (3 GHz to 8 GHz) is proposed. The design perception of the proposed divider based on two different stages and printed on FR4 (Epoxy laminate material) with the thickness of 1.57mm and єr =4.3 respectively. The modified design of this power divider including curved corners instead of the sharp edges and some modification in the length of matching stubs. In addition, this paper contain the power divider with equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance has been obtained over the mentioned frequency range. The design concept and optimization development is practicable through CST simulation software.",
"title": ""
}
] |
scidocsrr
|
c6be81bce1b12f8698fd308689266307
|
High Throughput Parallel Implementation of Aho-Corasick Algorithm on a GPU
|
[
{
"docid": "d9bd07f0c83f1f2ac8a3eddf9b66e000",
"text": "The constant increase in link speeds and number of threats poses challenges to network intrusion detection systems (NIDS), which must cope with higher traffic throughput and perform even more complex per-packet processing. In this paper, we present an intrusion detection system based on the Snort open-source NIDS that exploits the underutilized computational power of modern graphics cards to offload the costly pattern matching operations from the CPU, and thus increase the overall processing throughput. Our prototype system, called Gnort, achieved a maximum traffic processing throughput of 2.3 Gbit/s using synthetic network traces, while when monitoring real traffic using a commodity Ethernet interface, it outperformed unmodified Snort by a factor of two. The results suggest that modern graphics cards can be used effectively to speed up intrusion detection systems, as well as other systems that involve pattern matching operations.",
"title": ""
}
] |
[
{
"docid": "98aec0805e83e344a6b9898fb65e1a11",
"text": "Technology offers the potential to objectively monitor people's eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user's caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system's estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system's efficacy and develop more robust data input and activity inference methods.",
"title": ""
},
{
"docid": "8fc26adf38a835823f3ec590b43abbc9",
"text": "This paper presents an application of the analytic hierarchy process (AHP) used to select the most appropriate tool to support knowledge management (KM). This method adopts a multi-criteria approach that can be used to analyse and compare KM tools in the software market. The method is based on pairwise comparisons between several factors that affect the selection of the most appropriate KM tool. An AHP model is formulated and applied to a real case of assisting decision-makers in a leading communications company in Hong Kong to evaluate a suitable KM tool. We believe that the application shown can be of use to managers and that, because of its ease of implementation, others can benefit from this approach. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "39bfd705fb71e9ba4a503246408c6820",
"text": "We develop a theoretical model to describe and explain variation in corporate governance among advanced capitalist economies, identifying the social relations and institutional arrangements that shape who controls corporations, what interests corporations serve, and the allocation of rights and responsibilities among corporate stakeholders. Our “actor-centered” institutional approach explains firm-level corporate governance practices in terms of institutional factors that shape how actors’ interests are defined (“socially constructed”) and represented. Our model has strong implications for studying issues of international convergence.",
"title": ""
},
{
"docid": "a6def37312896cf470360b2c2282af69",
"text": "The use of herbal medicinal products and supplements has increased during last decades. At present, some herbs are used to enhance muscle strength and body mass. Emergent evidence suggests that the health benefits from plants are attributed to their bioactive compounds such as Polyphenols, Terpenoids, and Alkaloids which have several physiological effects on the human body. At times, manufacturers launch numerous products with banned ingredient inside with inappropriate amounts or fake supplement inducing harmful side effect. Unfortunately up to date, there is no guarantee that herbal supplements are safe for anyone to use and it has not helped to clear the confusion surrounding the herbal use in sport field especially. Hence, the purpose of this review is to provide guidance on the efficacy and side effect of most used plants in sport. We have identified plants according to the following categories: Ginseng, alkaloids, and other purported herbal ergogenics such as Tribulus Terrestris, Cordyceps Sinensis. We found that most herbal supplement effects are likely due to activation of the central nervous system via stimulation of catecholamines. Ginseng was used as an endurance performance enhancer, while alkaloids supplementation resulted in improvements in sprint and cycling intense exercises. Despite it is prohibited, small amount of ephedrine was usually used in combination with caffeine to enhance muscle strength in trained individuals. Some other alkaloids such as green tea extracts have been used to improve body mass and composition in athletes. Other herb (i.e. Rhodiola, Astragalus) help relieve muscle and joint pain, but results about their effects on exercise performance are missing.",
"title": ""
},
{
"docid": "a2a6753c37fd338d128f1f3ae22cf227",
"text": "Search engine optimization [SEO] is often about making modifications to parts of the website. When viewed individually, these changes might seem like incremental improvements, but when combined with optimization technique, they could have noticeable impact on website’s user experience and performance in search results. SEO requires considerable time, professional communicators should progressively apply these lessons in sequence presented in this paper and should keep up to date with frequently changing ranking algorithms and with the associated changing practices of the search engine optimization professionals. Search engine rankings are shaped by three classes of key logic participants namely, 1.Business logic, 2.Professional communicators, 3.End user logic. By using these key logics the optimization technique makes it easier for the users to search their web contents. It focuses only on general web search engine and the deliver lesson that professional communication can readily implement without any specialized technique. The key concepts introduces a theoretical framework for this to search engine optimization, describes how the approach was used to implement the three classes of stakeholders to shape the whole framework because it is easier for the audiences to find their web-content and websites through search engines. Search engine users of course hold the attention economy’s key commodity, their own attention, and confer it not only among the sites of contending web content creators but also among the search engine themselves, thereby compelling search engines to try to better accommodate users’ interests that search engines serve up among their top results.",
"title": ""
},
{
"docid": "0a7a2cfe41f1a04982034ef9cb42c3d4",
"text": "The biocontrol agent Torymus sinensis has been released into Japan, the USA, and Europe to suppress the Asian chestnut gall wasp, Dryocosmus kuriphilus. In this study, we provide a quantitative assessment of T. sinensis effectiveness for suppressing gall wasp infestations in Northwest Italy by annually evaluating the percentage of chestnuts infested by D. kuriphilus (infestation rate) and the number of T. sinensis adults that emerged per 100 galls (emergence index) over a 9-year period. We recorded the number of T. sinensis adults emerging from a total of 64,000 galls collected from 23 sampling sites. We found that T. sinensis strongly reduced the D. kuriphilus population, as demonstrated by reduced galls and an increased T. sinensis emergence index. Specifically, in Northwest Italy, the infestation rate was nearly zero 9 years after release of the parasitoid with no evidence of resurgence in infestation levels. In 2012, the number of T. sinensis females emerging per 100 galls was approximately 20 times higher than in 2009. Overall, T. sinensis proved to be an outstanding biocontrol agent, and its success highlights how the classical biological control approach may represent a cost-effective tool for managing an exotic invasive pest.",
"title": ""
},
{
"docid": "3e1165f031ac1337e79bd5c4eb1ad790",
"text": "Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).",
"title": ""
},
{
"docid": "bd681720305b4dbfca49c3c90ee671be",
"text": "This document describes an extension of the One-Time Password (OTP) algorithm, namely the HMAC-based One-Time Password (HOTP) algorithm, as defined in RFC 4226, to support the time-based moving factor. The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. A time-based variant of the OTP algorithm provides short-lived OTP values, which are desirable for enhanced security. The proposed algorithm can be used across a wide range of network applications, from remote Virtual Private Network (VPN) access and Wi-Fi network logon to transaction-oriented Web applications. The authors believe that a common and shared algorithm will facilitate adoption of two-factor authentication on the Internet by enabling interoperability across commercial and open-source implementations. (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.",
"title": ""
},
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
},
{
"docid": "3b2faaddf8f530799b758178700d9cce",
"text": "This research work presents a method for automatic classification of medical images in two classes Normal and Abnormal based on image features and automatic abnormality detection. Our proposed system consists of four phases Preprocessing, Feature extraction, Classification, and Post processing. Statistical texture feature set is derived from normal and abnormal images. We used the KNN classifier for classifying image. The KNN classifier performance compared with kernel based SVM classifier (Linear and RBF). The confusion matrix computed and result shows that KNN obtain 80% classification rate which is more than SVM classification rate. So we choose KNN algorithm for classification of images. If image classified as abnormal then post processing step applied on the image and abnormal region is highlighted on the image. The system has been tested on the number of real CT scan brain images.",
"title": ""
},
{
"docid": "0835b5f25cff5a6083cb8ed2690c5d21",
"text": "BACKGROUND\nIn 2010, the 'European Declaration on alternatives to surgical castration of pigs' was agreed. The Declaration stipulates that from January 1, 2012, surgical castration of pigs shall only be performed with prolonged analgesia and/or anaesthesia and from 2018 surgical castration of pigs should be phased out altogether. The Federation of Veterinarians of Europe together with the European Commission carried out an online survey via SurveyMonkey© to investigate the progress made in different European countries. This study provides descriptive information on the practice of piglet castration across 24 European countries. It gives also an overview on published literature regarding the practicability and effectiveness of the alternatives to surgical castration without anaesthesia/analgesia.\n\n\nRESULTS\nForty usable survey responses from 24 countries were received. Besides Ireland, Portugal, Spain and United Kingdom, who have of history in producing entire males, 18 countries surgically castrate 80% or more of their male pig population. Overall, in 5% of the male pigs surgically castrated across the 24 European countries surveyed, castration is performed with anaesthesia and analgesia and 41% with analgesia (alone). Meloxicam, ketoprofen and flunixin were the most frequently used drugs for analgesia. Procaine was the most frequent local anaesthetic. The sedative azaperone was frequently mentioned even though it does not have analgesic properties. Half of the countries surveyed believed that the method of anaesthesia/analgesia applied is not practicable and effective. However, countries that have experience in using both anaesthesia and post-operative analgesics, such as Norway, Sweden, Switzerland and The Netherlands, found this method practical and effective. The estimated average percentage of immunocastrated pigs in the countries surveyed was 2.7% (median = 0.2%), where Belgium presented the highest estimated percentage of immunocastrated pigs (18%).\n\n\nCONCLUSION\nThe deadlines of January 1, 2012, and of 2018 are far from being met. The opinions on the animal-welfare-conformity and the practicability of the alternatives to surgical castration without analgesia/anaesthesia and the alternatives to surgical castration are widely dispersed. Although countries using analgesia/anaesthesia routinely found this method practical and effective, only few countries seem to aim at meeting the deadline to phase out surgical castration completely.",
"title": ""
},
{
"docid": "c1d84d7ba3fdae0f5f49c3740345fce5",
"text": "Applying end-to-end learning to solve complex, interactive, pixeldriven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to highlevel policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our realworld experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on modelbased trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.",
"title": ""
},
{
"docid": "0f1f6570abf200de786221f28210ed78",
"text": "This paper presents a novel idea for reducing the data storage problems in the self-driving cars. Self-driving cars is a technology that is observed by the modern word with most curiosity. However the vulnerability with the car is the growing data and the approach for handling such huge amount of data growth. This paper proposes a cloud based self-driving car which can optimize the data storage problems in such cars. The idea is to not store any data in the car, rather download everything from the cloud as per the need of the travel. This allows the car to not keep a huge amount of data and rely on a cloud infrastructure for the drive.",
"title": ""
},
{
"docid": "e22378cc4ae64e9c3abbd4b308198fb6",
"text": "Knowledge about the argumentative structure of scientific articles can, amongst other things, be used to improve automatic abstracts. We argue that the argumentative structure of scientific discourse can be automatically detected because reasordng about problems, research tasks and solutions follows predictable patterns. Certain phrases explicitly mark the rhetorical status (communicative function) of sentences with respect to the global argumentative goal. Examples for such meta-diacaurse markers are \"in this paper, we have p r e s e n t e d . . . \" or \"however, their method fails to\". We report on work in progress about recognizing such meta-comments automatically in research articles from two disciplines: computational linguistics and medicine (cardiology). 1 M o t i v a t i o n We are interested in a formal description of the document s t ructure of scientific articles from different disciplines. Such a description could be of practical use for many applications in document management; our specific mot ivat ion for detecting document structure is qual i ty improvement in automatic abstracting. Researchem in the field of automatic abstracting largely agree that it is currently not technically feasible to create automatic abstracts based on full text unders tanding (Sparck Jones 1994). As a result, many researchers have turned to sentence extraction (Kupiec, Pedersen, & Chen 1995; Brandow, Mitze, & Rau 1995; Hovy & Lin 1997). Sentence extraction, which does not involve any deep analysis, has the huge advantage of being robust with respect to individual writing style, discipline and text type (genre). Instead of producing a b s t r a c t , this results produces only extracts: documen t surrogates consisting of a number of sentences selected verbat im from the original text. We consider a concrete document retrieval (DR) scenario in which a researcher wants to select one or more scientific articles from a large scientific database (or even f rom the Internet) for further inspection. The ma in task for the searcher is relevance decision for each paper: she needs to decide whether or not to spend more t ime on a paper (read or skim-read it), depending on how useful it presumably is to her current information needs. Traditional sentence extracts can be used as rough-and-ready relevance indicators for this task, but they are not doing a great job at representing the contents of the original document: searchers often get the wrong idea about what the text is about. Much of this has to do with the fact that extracts are typically incoherent texts, consisting of potential ly unrelated sentences which have been taken out of their context. Crucially, extracts have no handle at revealing the text 's logical and semantic organisation. More sophisticated, user-tailored abstracts could help the searcher make a fast, informed relevance decision by taking factors like the searcher's expertise and current information need into account. If the searcher is dealing with research she knows well, her information needs might be quite concrete: during the process of writing her own paper she might want to find research which supports her own claims, find out if there are contradictory results to hers in the literature, or compare her results to those of researchers using a similar methodology. 
A different information need arises when she wants to gain an overview of a new research area as an only \"partially informed user\" in this field (Kircz 1991) she will need to find out about specific research goals, the names of the researchers who have contributed the main research ideas in a given time period, along with information of methodology and results in this research field. There are new functions these abstracts could fulfil. In order to make an informed relevance decision, the searcher needs to judge differences and similarities between papers, e.g. how a given paper relates to similar papers with respect to research goals or methodology, so that she can place the research described in a given paper in the larger picture of the field, a function we call navigation between research articles. A similar operation is navigation within a paper, which supports searchers in non-linear reading and allows them to find relevant information faster, e.g. numerical results. We believe that a document surrogate that aims at supporting such functions should characterize research articles in terms of the problems, research tasks and",
"title": ""
},
{
"docid": "00b2d45d6810b727ab531f215d2fa73e",
"text": "Parental preparation for a child's discharge from the hospital sets the stage for successful transitioning to care and recovery at home. In this study of 135 parents of hospitalized children, the quality of discharge teaching, particularly the nurses' skills in \"delivery\" of parent teaching, was associated with increased parental readiness for discharge, which was associated with less coping difficulty during the first 3 weeks postdischarge. Parental coping difficulty was predictive of greater utilization of posthospitalization health services. These results validate the role of the skilled nurse as a teacher in promoting positive outcomes at discharge and beyond the hospitalization.",
"title": ""
},
{
"docid": "f21b0f519f4bf46cb61b2dc2861014df",
"text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.",
"title": ""
},
{
"docid": "9a1cb9ebd7bfb9eb10898602399cd304",
"text": "HBase is a distributed column-oriented database built on top of HDFS. HBase is the Hadoop application to use when you require real-time read/write random access to very large datasets. HBase is a scalable data store targeted at random read and write access of (fairly-) structured data. It's modeled after Google's Big table and targeted to support large tables, on the order of billions of rows and millions of columns. It uses HDFS as the underlying file system and is designed to be fully distributed and highly available. Version 0.20 introduces significant performance improvement. Base's Table Input Format is designed to allow a Map Reduce program to operate on data stored in an HBase table. Table Output Format is for writing Map Reduce outputs into an HBase table. HBase has different storage characteristics than HDFS, such as the ability to do row updates and column indexing, so we can expect to see these features used by Hive in future releases. It is already possible to access HBase tables from Hive. This paper includes the step by step introduction to the HBase, Identify differences between apache HBase and a traditional RDBMS, The Problem with Relational Database Systems, Relation between the Hadoop and HBase, How an Apache HBase table is physically stored on disk. Later part of this paper introduces Map Reduce, HBase table and how Apache HBase Cells stores data, what happens to data when it is deleted. Last part explains difference between Big Data and HBase, Conclusion followed with the References.",
"title": ""
},
{
"docid": "d551eda5717671b53afc330ab2188e8d",
"text": "Graphs are a powerful representation formalism that can be applied to a variety of aspects related to language processing. We provide an overview of how Natural Language Processing (NLP) problems have been projected into the graph framework, focusing in particular on graph construction – a crucial step in modeling the data to emphasize the phenomena targeted.",
"title": ""
},
{
"docid": "de052fc7092f8baa599cf8c79ecd8059",
"text": "In this paper, we propose a novel multi-task learning architecture, which incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow for learning of task-specific features from the global pool, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the CityScapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multitask loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show how our method scales well with task complexity compared to baselines.",
"title": ""
},
{
"docid": "0c1e7ff806fd648dbd7adec1ec639413",
"text": "We recently proposed the Rate Control Protocol (RCP) as way to minimize download times (or flow-completion times). Simulations suggest that if RCP were widely deployed, downloads would frequently finish ten times faster than with TCP. This is because RCP involves explicit feedback from the routers along the path, allowing a sender to pick a fast starting rate, and adapt quickly to network conditions. RCP is particularly appealing because it can be shown to be stable under broad operating conditions, and its performance is independent of the flow-size distribution and the RTT. Although it requires changes to the routers, the changes are small: The routers keep no per-flow state or per-flow queues, and the per-packet processing is minimal. However, the bar is high for a new congestion control mechanism - introducing a new scheme requires enormous change, and the argument needs to be compelling. And so, to enable some scientific and repeatable experiments with RCP, we have built and tested an open and public implementation of RCP; we have made available both the end- host software, and the router hardware. In this paper we describe our end-host implementation of RCP in Linux, and our router implementation in Verilog (on the NetFPGA platform). We hope that others will be able to use these implementations to experiment with RCP and further our understanding of congestion control.",
"title": ""
}
] |
scidocsrr
|
436abca7cd898f03ecbe6f230c5bf4ce
|
Virtual Machine Introspection: Techniques and Applications
|
[
{
"docid": "4d9397a14425e13f9e4b7a340008f416",
"text": "Applying optimized security settings to applications is a difficult and laborious task. Especially in cloud computing, where virtual servers with various pre-installed software packages are leased, selecting optimized security settings is very difficult. In particular, optimized security settings are not identical in every setup. They depend on characteristics of the setup, on the ways an application is used or on other applications running on the same system. Configuring optimized settings given these interdependencies is a complex and time-consuming task. In this work, we present an autonomous agent which improves security settings of applications which run in virtual servers. The agent retrieves custom-made security settings for a target application by investigating its specific setup, it tests and transparently changes settings via introspection techniques unbeknownst from the perspective of the virtual server. During setting selection, the application's operation is not disturbed nor any user interaction is needed. Since optimal settings can change over time or they can change depending on different tasks the application handles, the agent can continuously adapt settings as well as improve them periodically. We call this approach hot-hardening and present results of an implementation that can hot-harden popular networking applications such as Apache2 and OpenSSH.",
"title": ""
},
{
"docid": "79503c15b37209892fa7cfe02c90f967",
"text": "To direct the operation of a computer, we often use a shell, a user interface that provides accesses to the OS kernel services. Traditionally, shells are designed atop an OS kernel. In this paper, we show that a shell can also be designed below an OS. More specifically, we present HYPERSHELL, a practical hypervisor layer guest OS shell that has all of the functionality of a traditional shell, but offers better automation, uniformity and centralized management. This will be particularly useful for cloud and data center providers to manage the running VMs in a large scale. To overcome the semantic gap challenge, we introduce a reverse system call abstraction, and we show that this abstraction can significantly relieve the painful process of developing software below an OS. More importantly, we also show that this abstraction can be implemented transparently. As such, many of the legacy guest OS management utilities can be directly reused in HYPERSHELL without any modification. Our evaluation with over one hundred management utilities demonstrates that HYPERSHELL has 2.73X slowdown on average compared to their native in-VM execution, and has less than 5% overhead to the guest OS kernel.",
"title": ""
},
{
"docid": "6e666fdd26ea00a6eebf7359bdf82329",
"text": "Kernel-level attacks or rootkits can compromise the security of an operating system by executing with the privilege of the kernel. Current approaches use virtualization to gain higher privilege over these attacks, and isolate security tools from the untrusted guest VM by moving them out and placing them in a separate trusted VM. Although out-of-VM isolation can help ensure security, the added overhead of world-switches between the guest VMs for each invocation of the monitor makes this approach unsuitable for many applications, especially fine-grained monitoring. In this paper, we present Secure In-VM Monitoring (SIM), a general-purpose framework that enables security monitoring applications to be placed back in the untrusted guest VM for efficiency without sacrificing the security guarantees provided by running them outside of the VM. We utilize contemporary hardware memory protection and hardware virtualization features available in recent processors to create a hypervisor protected address space where a monitor can execute and access data in native speeds and to which execution is transferred in a controlled manner that does not require hypervisor involvement. We have developed a prototype into KVM utilizing Intel VT hardware virtualization technology. We have also developed two representative applications for the Windows OS that monitor system calls and process creations. Our microbenchmarks show at least 10 times performance improvement in invocation of a monitor inside SIM over a monitor residing in another trusted VM. With a systematic security analysis of SIM against a number of possible threats, we show that SIM provides at least the same security guarantees as what can be achieved by out-of-VM monitors.",
"title": ""
},
{
"docid": "b6e67047ac710fa619c809839412231c",
"text": "An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty in accurately extracting semantic meaning from the hypervisor's hardware level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has been made at the cost of reintroducing trust into the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a fresh look at the VMI design space. This paper organizes previous work based on the essential design considerations when building a VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then observes portions of the VMI design space which have been under-explored, as well as potential adaptations of existing techniques to bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest for meaningful trust in virtualized environments through VM introspection.",
"title": ""
}
] |
[
{
"docid": "8f704e4c4c2a0c696864116559a0f22c",
"text": "Friendships with competitors can improve the performance of organizations through the mechanisms of enhanced collaboration, mitigated competition, and better information exchange. Moreover, these benefits are best achieved when competing managers are embedded in a cohesive network of friendships (i.e., one with many friendships among competitors), since cohesion facilitates the verification of information culled from the network, eliminates the structural holes faced by customers, and facilitates the normative control of competitors. The first part of this analysis examines the performance implications of the friendship-network structure within the Sydney hotel industry, with performance being the yield (i.e., revenue per available room) of a given hotel. This shows that friendships with competitors lead to dramatic improvements in hotel yields. Performance is further improved if a manager’s competitors are themselves friends, evidencing the benefit of cohesive friendship networks. The second part of the analysis examines the structure of friendship ties among hotel managers and shows that friendships are more likely between managers who are competitors.",
"title": ""
},
{
"docid": "39d1271ce88b840b8d75806faf9463ad",
"text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.",
"title": ""
},
{
"docid": "d1f02e2f57cffbc17387de37506fddc9",
"text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.",
"title": ""
},
{
"docid": "c95980f3f1921426c20757e6020f62c2",
"text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.",
"title": ""
},
{
"docid": "480c55bca0099f25a01fe7a9701eef6a",
"text": "Development of the technology in the area of the cameras, computers and algorithms for 3D the reconstruction of the objects from the images resulted in the increased popularity of the photogrammetry. Algorithms for the 3D model reconstruction are so advanced that almost anyone can make a 3D model of photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of the close-range photogrammetry applications, based on the open source technologies. All steps of obtaining 3D point cloud are covered in this paper. Special attention is given to the camera calibration, for which two-step process of calibration is used. Both, presented algorithm and accuracy of the point cloud are tested by calculating the spatial difference between referent and produced point clouds. During algorithm testing, robustness and swiftness of obtaining 3D data is noted, and certainly usage of this and similar algorithms has a lot of potential in the real-time application. That is the reason why this research can find its application in the architecture, spatial planning, protection of cultural heritage, forensic, mechanical engineering, traffic management, medicine and other sciences. * Corresponding author",
"title": ""
},
{
"docid": "186141651bfb780865712deb8c407c54",
"text": "Sample and statistically based singing synthesizers typically require a large amount of data for automatically generating expressive synthetic performances. In this paper we present a singing synthesizer that using two rather small databases is able to generate expressive synthesis from an input consisting of notes and lyrics. The system is based on unit selection and uses the Wide-Band Harmonic Sinusoidal Model for transforming samples. The first database focuses on expression and consists of less than 2 minutes of free expressive singing using solely vowels. The second one is the timbre database which for the English case consists of roughly 35 minutes of monotonic singing of a set of sentences, one syllable per beat. The synthesis is divided in two steps. First, an expressive vowel singing performance of the target song is generated using the expression database. Next, this performance is used as input control of the synthesis using the timbre database and the target lyrics. A selection of synthetic performances have been submitted to the Interspeech Singing Synthesis Challenge 2016, in which they are compared to other competing systems.",
"title": ""
},
{
"docid": "5e50ff15898a96b9dec220331c62820d",
"text": "BACKGROUND AND PURPOSE\nPatients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA.\n\n\nMETHODS\nENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio, 2.0-3.0; median time-in-therapeutic range, 68.4%) versus once-daily edoxaban (higher-dose edoxaban regimen [HDER], 60/30 mg; lower-dose edoxaban regimen, 30/15 mg) with 2.8-year median follow-up. Primary end points included all stroke/systemic embolic events (efficacy) and major bleeding (safety). Because only HDER is approved, we focused on the comparison of HDER versus warfarin.\n\n\nRESULTS\nOf 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132 without previous IS/TIA, patients with previous IS/TIA were at higher risk of both thromboembolism and bleeding (stroke/systemic embolic events 2.83% versus 1.42% per year; P<0.001; major bleeding 3.03% versus 2.64% per year; P<0.001; intracranial hemorrhage, 0.70% versus 0.40% per year; P<0.001). Among patients with previous IS/TIA, annualized intracranial hemorrhage rates were lower with HDER than with warfarin (0.62% versus 1.09%; absolute risk difference, 47 [8-85] per 10 000 patient-years; hazard ratio, 0.57; 95% confidence interval, 0.36-0.92; P=0.02). No treatment subgroup interactions were found for primary efficacy (P=0.86) or for intracranial hemorrhage (P=0.28).\n\n\nCONCLUSIONS\nPatients with atrial fibrillation with previous IS/TIA are at high risk of recurrent thromboembolism and bleeding. HDER is at least as effective and is safer than warfarin, regardless of the presence or the absence of previous IS or TIA.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00781391.",
"title": ""
},
{
"docid": "99e8b7b6b883be51c5413c82ac1d5009",
"text": "Named entities are usually composable and extensible. Typical examples are names of symptoms and diseases in medical areas. To distinguish these entities from general entities, we name them compound entities. In this paper, we present an attention-based Bi-GRU-CapsNet model to detect hypernymy relationship between compound entities. Our model consists of several important components. To avoid the out-of-vocabulary problem, English words or Chinese characters in compound entities are fed into the bidirectional gated recurrent units. An attention mechanism is designed to focus on the differences between two compound entities. Since there are some different cases in hypernymy relationship between compound entities, capsule network is finally employed to decide whether the hypernymy relationship exists or not. Experimental results demonstrate the advantages of our model over the state-of-theart methods both on English and Chinese corpora of symptom and disease pairs.",
"title": ""
},
{
"docid": "3ad124875f073ff961aaf61af2832815",
"text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT\na perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.",
"title": ""
},
{
"docid": "2d340d004f81a9ed16ead41044103c5d",
"text": "Bio-medical image segmentation is one of the promising sectors where nuclei segmentation from high-resolution histopathological images enables extraction of very high-quality features for nuclear morphometrics and other analysis metrics in the field of digital pathology. The traditional methods including Otsu thresholding and watershed methods do not work properly in different challenging cases. However, Deep Learning (DL) based approaches are showing tremendous success in different modalities of bio-medical imaging including computation pathology. Recently, the Recurrent Residual U-Net (R2U-Net) has been proposed, which has shown state-of-the-art (SOTA) performance in different modalities (retinal blood vessel, skin cancer, and lung segmentation) in medical image segmentation. However, in this implementation, the R2U-Net is applied to nuclei segmentation for the first time on a publicly available dataset that was collected from the Data Science Bowl Grand Challenge in 2018. The R2U-Net shows around 92.15% segmentation accuracy in terms of the Dice Coefficient (DC) during the testing phase. In addition, the qualitative results show accurate segmentation, which clearly demonstrates the robustness of the R2U-Net model for the nuclei segmentation task.",
"title": ""
},
{
"docid": "7eb7cfc2ca574b0965008117cf7070d9",
"text": "We present a framework, Atlas, which incorporates application-awareness into Software-Defined Networking (SDN), which is currently capable of L2/3/4-based policy enforcement but agnostic to higher layers. Atlas enables fine-grained, accurate and scalable application classification in SDN. It employs a machine learning (ML) based traffic classification technique, a crowd-sourcing approach to obtain ground truth data and leverages SDN's data reporting mechanism and centralized control. We prototype Atlas on HP Labs wireless networks and observe 94% accuracy on average, for top 40 Android applications.",
"title": ""
},
{
"docid": "6506a8e0d2772a719f025982770d7eea",
"text": "The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the overall system throughput. NoSQL database performance is in turn strongly influenced by how well the data model and query capabilities fit the application use cases, and so system-specific testing and characterization is required. This paper presents a method and the results of a study that selected among three NoSQL databases for a large, distributed healthcare organization. While the method and study considered consistency, availability, and partition tolerance (CAP) tradeoffs, and other quality attributes that influence the selection decision, this paper reports on the performance evaluation method and results. In our testing, a typical workload and configuration produced throughput that varied from 225 to 3200 operations per second between database products, while read operation latency varied by a factor of 5 and write latency by a factor of 4 (with the highest throughput product delivering the highest latency). We also found that achieving strong consistency reduced throughput by 10-25% compared to eventual consistency.",
"title": ""
},
{
"docid": "36b0ace93b5a902966e96e4649d83b98",
"text": "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.",
"title": ""
},
{
"docid": "3257f01d96bd126bd7e3d6f447e0326d",
"text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.",
"title": ""
},
{
"docid": "2d0765e6b695348dea8822f695dcbfa1",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
{
"docid": "bde769df506e361bf374bd494fc5db6f",
"text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.",
"title": ""
},
{
"docid": "4cfeef6e449e37219c75f8063220c1f8",
"text": "The 20 century was based on local linear engineering of complicated systems. We made cars, airplanes and chemical plants for example. The 21ot century has opened a new basis for holistic non-linear design of complex systems, such as the Internet, air traffic management and nanotechnologies. Complexity, interconnectivity, interaction and communication are major attributes of our evolving society. But, more interestingly, we have started to understand that chaos theories may be more important than reductionism, to better understand and thrive on our planet. Systems need to be investigated and tested as wholes, which requires a cross-disciplinary approach and new conceptual principles and tools. Consequently, schools cannot continue to teach isolated disciplines based on simple reductionism. Science; Technology, Engineering, and Mathematics (STEM) should be integrated together with the Arts to promote creativity together with rationalization, and move to STEAM (with an \"A\" for Arts). This new concept emphasizes the possibility of longer-term socio-technical futures instead of short-term financial predictions that currently lead to uncontrolled economies. Human-centered design (HCD) can contribute to improving STEAM education technologies, systems and practices. HCD not only provides tools and techniques to build useful and usable things, but also an integrated approach to learning by doing, expressing and critiquing, exploring possible futures, and understanding complex systems.",
"title": ""
},
{
"docid": "a75d3395a1d4859b465ccbed8647fbfe",
"text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
}
] |
scidocsrr
|
f02800775887b28ea5debc405b51badd
|
Learning and Transferring Social and Item Visibilities for Personalized Recommendation
|
[
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
},
{
"docid": "c0f789451f298fb00abc908ee00b4735",
"text": "Data sparsity is a major problem for collaborative filtering (CF) techniques in recommender systems, especially for new users and items. We observe that, while our target data are sparse for CF systems, related and relatively dense auxiliary data may already exist in some other more mature application domains. In this paper, we address the data sparsity problem in a target domain by transferring knowledge about both users and items from auxiliary data sources. We observe that in different domains the user feedbacks are often heterogeneous such as ratings vs. clicks. Our solution is to integrate both user and item knowledge in auxiliary data sources through a principled matrix-based transfer learning framework that takes into account the data heterogeneity. In particular, we discover the principle coordinates of both users and items in the auxiliary data matrices, and transfer them to the target domain in order to reduce the effect of data sparsity. We describe our method, which is known as coordinate system transfer or CST, and demonstrate its effectiveness in alleviating the data sparsity problem in collaborative filtering. We show that our proposed method can significantly outperform several state-of-the-art solutions for this problem.",
"title": ""
}
] |
[
{
"docid": "01e419d399bd19b9ed1c34c67f1767a9",
"text": "By using music written in a certain style as training data, parameters can be calculated for Markov chains and hidden Markov models to capture the musical style of the training data as mathematical models.",
"title": ""
},
{
"docid": "1e38e2e7f3d1f2ae0ac74964f115f89a",
"text": "Abstract—In this paper, a high-conversion-ratio bidirectional dc–dc converter with coupled inductor is proposed. In the boost mode, two capacitors are parallel charged and series discharged by the coupled inductor. Thus, high step-up voltage gain can be achieved with an appropriate duty ratio. The voltage stress on the main switch is reduced by a passive clamp circuit. Therefore, the low resistance RDS (ON) of the main switch can be adopted to reduce conduction loss. In the buck mode, two capacitors are series charged and parallel discharged by the coupled inductor. The bidirectional converter can have high step-down gain. Aside from that, all of the switches achieve zero voltage-switching turn-on, and the switching loss can be improved. Due to two active clamp circuits, the energy of the leakage inductor of the coupled inductor is recycled. The efficiency can be further improved. The operating principle and the steady-state analyses of the voltage gain are discussed.",
"title": ""
},
{
"docid": "b712bbcad29af3bb8ad210fc9bbeab24",
"text": "Image-based virtual try-on systems for fitting a new in-shop clothes into a person image have attracted increasing research attention, yet is still challenging. A desirable pipeline should not only transform the target clothes into the most fitting shape seamlessly but also preserve well the clothes identity in the generated image, that is, the key characteristics (e.g. texture, logo, embroidery) that depict the original clothes. However, previous image-conditioned generation works fail to meet these critical requirements towards the plausible virtual try-on performance since they fail to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. In this work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On Network (CP-VTON) for addressing all real-world challenges in this task. First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing correspondences of interest points as prior works did. Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered image to ensure smoothness. Extensive experiments on a fashion dataset demonstrate our CP-VTON achieves the state-of-the-art virtual try-on performance both qualitatively and quantitatively.",
"title": ""
},
{
"docid": "b825426604420620e1bba43c0f45115e",
"text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
{
"docid": "c1f17055249341dd6496fce9a2703b18",
"text": "With systems performing Simultaneous Localization And Mapping (SLAM) from a single robot reaching considerable maturity, the possibility of employing a team of robots to collaboratively perform a task has been attracting increasing interest. Promising great impact in a plethora of tasks ranging from industrial inspection to digitization of archaeological structures, collaborative scene perception and mapping are key in efficient and effective estimation. In this paper, we propose a novel, centralized architecture for collaborative monocular SLAM employing multiple small Unmanned Aerial Vehicles (UAVs) to act as agents. Each agent is able to independently explore the environment running limited-memory SLAM onboard, while sending all collected information to a central server, a ground station with increased computational resources. The server manages the maps of all agents, triggering loop closure, map fusion, optimization and distribution of information back to the agents. This allows an agent to incorporate observations from others in its SLAM estimates on the fly. We put the proposed framework to the test employing a nominal keyframe-based monocular SLAM algorithm, demonstrating the applicability of this system in multi-UAV scenarios.",
"title": ""
},
{
"docid": "de9ac411ae21f12d1101765b81ba9e0c",
"text": "Aarti Singh Department of Computer Science, Guru Nanak Girls College, Yamuna Nagar, Haryana, India Email: singh2208@gmail.com -------------------------------------------------------------------ABSTRACT---------------------------------------------------------Ontologies play a vital role in knowledge representation in artificial intelligent systems. With emergence and acceptance of semantic web and associated services offered to the users, more and more ontologies have been developed by various stack-holders. Different ontologies need to be mapped for various systems to communicate with each other. Ontology mapping is an open research issue in web semantics. Exact mapping of ontologies is rare to achieve so it’s an optimization problem. This work presents and optimized ontology mapping mechanism which deploys genetic algorithm.",
"title": ""
},
{
"docid": "e3978d849b1449c40299841bfd70ea69",
"text": "New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern-matching, called BFPM, that exploits a hardware-based programmable statemachine technology to achieve deterministic processing rates that are independent of input and pattern characteristics on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort with a total of 32 K characters in only 128 KB of memory.",
"title": ""
},
{
"docid": "cc673c5b16be6fb62a69b471d6e24e95",
"text": "Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.",
"title": ""
},
{
"docid": "becea3d4b1a791b74dc7c6de15584611",
"text": "This study analyzes the climate change and economic impacts of food waste in the United States. Using lossadjusted national food availability data for 134 food commodities, it calculates the greenhouse gas emissions due to wasted food using life cycle assessment and the economic cost of the waste using retail prices. The analysis shows that avoidable food waste in the US exceeds 55 million metric tonnes per year, nearly 29% of annual production. This waste produces life-cycle greenhouse gas emissions of at least 113 million metric tonnes of CO2e annually, equivalent to 2% of national emissions, and costs $198 billion.",
"title": ""
},
{
"docid": "35c7cb759c1ee8e7f547d9789e74b0f0",
"text": "This research investigates an axial flux single-rotor single-stator asynchronous motor (AFAM) with aluminum and copper cage windings. In order to avoid using die casting of the rotor cage winding an open rotor slot structure was implemented. In future, this technique allows using copper cage winding avoiding critically high temperature treatment as in the die casting processing of copper material. However, an open slot structure leads to a large equivalent air gap length. Therefore, semi-magnetic wedges should be used to reduce the effect of open slots and consequently to improve the machine performance. The paper aims to investigate the feasibility of using open slot rotor structure (for avoiding die casting) and impact of semi-magnetic wedges to eliminate negative effects of open slots. The results were mainly obtained by 2D finite element method (FEM) simulations. Measurement results of mechanical performance of the prototype (with aluminum cage winding) given in the paper prove the simulated results.",
"title": ""
},
{
"docid": "693417e5608cf092842ab34ee8cce8d9",
"text": "Software as a Service has become a dominant IT news topic over the last few years. Especially in these current recession times, adopting SaaS solutions is increasingly becoming the more favorable alternative for customers rather than investing on brand new on-premise software or outsourcing. This fact has inevitably stimulated the birth of numerous SaaS vendors. Unfortunately, many small-to-medium vendors have emerged only to disappear again from the market. A lack of maturity in their pricing strategy often becomes part of the reason. This paper presents the ’Pricing Strategy Guideline Framework (PSGF)’ that assists SaaS vendors with a guideline to ensure that all the fundamental pricing elements are included in their pricing strategy. The PSGF describes five different layers that need to be taken to price a software: value creation, price structure, price and value communication, price policy, and price level. The PSGF can be of particularly great use for the startup vendors that tend to have less experience in pricing their SaaS solutions. Up until now, there have been no SaaS pricing frameworks available in the SaaS research area, such as the PSGF developed in this research. The PSGF is evaluated in a case study at a Dutch SaaS vendor in the Finance sector.",
"title": ""
},
{
"docid": "7442f94af36f6d317291da814e7f3676",
"text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.",
"title": ""
},
{
"docid": "f5128625b3687c971ba3bef98d7c2d2a",
"text": "In three experiments, we investigated the influence of juror, victim, and case factors on mock jurors' decisions in several types of child sexual assault cases (incest, day care, stranger abduction, and teacher-perpetrated abuse). We also validated and tested the ability of several scales measuring empathy for child victims, children's believability, and opposition to adult/child sex, to mediate the effect of jurors' gender on case judgments. Supporting a theoretical model derived from research on the perceived credibility of adult rape victims, women compared to men were more empathic toward child victims, more opposed to adult/child sex, more pro-women, and more inclined to believe children generally. In turn, women (versus men) made more pro-victim judgments in hypothetical abuse cases; that is, attitudes and empathy generally mediated this juror gender effect that is pervasive in this literature. The experiments also revealed that strength of case evidence is a powerful factor in determining judgments, and that teen victims (14 years old) are blamed more for sexual abuse than are younger children (5 years old), but that perceptions of 5 and 10 year olds are largely similar. Our last experiment illustrated that our findings of mediation generalize to a community member sample.",
"title": ""
},
{
"docid": "41cf7f09815ad0a8ebac914eaeaa44e3",
"text": "Robotic devices are well-suited to provide high intensity upper limb therapy in order to induce plasticity and facilitate recovery from brain and spinal cord injury. In order to realize gains in functional independence, devices that target the distal joints of the arm are necessary. Further, the robotic device must exhibit key dynamic properties that enable both high dynamic transparency for assessment, and implementation of novel interaction control modes that significantly engage the participant. In this paper, we present the kinematic design, dynamical characterization, and clinical validation of the RiceWrist-S, a serial robotic mechanism that facilitates rehabilitation of the forearm in pronation-supination, and of the wrist in flexion-extension and radial-ulnar deviation. The RiceWrist-Grip, a grip force sensing handle, is shown to provide grip force measurements that correlate well with those acquired from a hand dynamometer. Clinical validation via a single case study of incomplete spinal cord injury rehabilitation for an individual with injury at the C3-5 level showed moderate gains in clinical outcome measures. Robotic measures of movement smoothness also captured gains, supporting our hypothesis that intensive upper limb rehabilitation with the RiceWrist-S would show beneficial outcomes. This work was supported in part by grants from Mission Connect, a project of the TIRR Foundation, the National Science Foundation Graduate Research Fellowship Program under Grant No. 0940902, NSF CNS-1135916, and H133P0800007-NIDRRARRT. A.U. Pehlivan, F. Sergi, A. Erwin, and M. K. O’Malley are with the Mechatronics and Haptic Interfaces Laboratory, Department of Mechanical Engineering, Rice University, Houston, TX 77005. F. Sergi is also with the department of PM&R, Baylor College of Medicine, Houston, TX 77030. N. Yozbatiran and G. E. Francisco are with the Department of PM&R and UTHealth Motor Recovery Lab, University of Texas Health Science Center at Houston, TX 77030 (e-mails: aliutku@rice.edu, fabs@rice.edu, ace7@rice.edu, Nuray.Yozbatiran@uth.tmc.edu, Gerard.E.Francisco@uth.tmc.edu, and omalleym@rice.edu)",
"title": ""
},
{
"docid": "6da5e3f263171d93e2a1d6fe8e38a788",
"text": "With the thriving growth of the cloud computing, the security and privacy concerns of outsourcing data have been increasing dramatically. However, because of delegating the management of data to an untrusted cloud server in data outsourcing process, the data access control has been recognized as a challenging issue in cloud storage systems. One of the preeminent technologies to control data access in cloud computing is Attribute-based Encryption (ABE) as a cryptographic primitive, which establishes the decryption ability on the basis of a user’s attributes. This paper provides a comprehensive survey on attribute-based access control schemes and compares each scheme’s functionality and characteristic. We also present a thematic taxonomy of attribute-based approaches based on significant parameters, such as access control mode, architecture, revocation mode, revocation method, revocation issue, and revocation controller. The paper reviews the state-of-the-art ABE methods and categorizes them into three main classes, such as centralized, decentralized, and hierarchal, based on their architectures. We also analyzed the different ABE techniques to ascertain the advantages and disadvantages, the significance and requirements, and identifies the research gaps. Finally, the paper presents open issues and challenges for further investigations. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cb667b5d3dd2e680f15b7167d20734cd",
"text": "In this letter, a low loss high isolation broadband single-port double-throw (SPDT) traveling-wave switch using 90 nm CMOS technology is presented. A body bias technique is utilized to enhance the circuit performance of the switch, especially for the operation frequency above 30 GHz. The parasitic capacitance between the drain and source of the NMOS transistor can be further reduced using the negative body bias technique. Moreover, the insertion loss, the input 1 dB compression point (P1 dB)> and the third-order intermodulation (IMD3) of the switch are all improved. With the technique, the switch demonstrates an insertion loss of 3 dB and an isolation of better than 48 dB from dc to 60 GHz. The chip size of the proposed switch is 0.68 × 0.87 mm2 with a core area of only 0.32 × 0.21 mm2.",
"title": ""
},
{
"docid": "80563d90bfdccd97d9da0f7276468a43",
"text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.",
"title": ""
},
{
"docid": "c55d17ec5082c2c5f12b22520b359c91",
"text": "Android apps are made of components which can leak information between one another using the ICC mechanism. With the growing momentum of Android, a number of research contributions have led to tools for the intra-app analysis of Android apps. Unfortunately, these state-of-the-art approaches, and the associated tools, have long left out the security flaws that arise across the boundaries of single apps, in the interaction between several apps. In this paper, we present a tool called ApkCombiner which aims at reducing an inter-app communication problem to an intra-app inter-component communication problem. In practice, ApkCombiner combines different apps into a single apk on which existing tools can indirectly perform inter-app analysis. We have evaluated ApkCombiner on a dataset of 3,000 real-world Android apps, to demonstrate its capability to support static context-aware inter-app analysis scenarios.",
"title": ""
}
] |
scidocsrr
|
a023de5e462729845752e031cbf329b0
|
UGR'16: A new dataset for the evaluation of cyclostationarity-based network IDSs
|
[
{
"docid": "06860bf1ede8dfe83d3a1b01fe4df835",
"text": "The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. a 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "9de44948e28892190f461199a1d33935",
"text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. 
Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data",
"title": ""
},
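The triple-store layout described in the passage above (a single relational table of subject/property/object rows, with SPARQL graph patterns mapped to SQL self-joins) can be illustrated with a minimal sketch. The schema and query below are illustrative assumptions based on the passage's running example, not code from the original lecture notes:

```python
import sqlite3

# A minimal triple-store sketch: one relational table holding (s, p, o) rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
con.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("Katja", "teaches", "Databases"),
        ("Katja", "works_for", "MPI Informatics"),
        ("Martin", "teaches", "Databases"),
        ("Martin", "works_for", "MPI Informatics"),
        ("Ralf", "teaches", "Information Retrieval"),
        ("Ralf", "works_for", "Saarland University"),
    ],
)

# The SPARQL pattern  ?x teaches Databases . ?x works_for "MPI Informatics"
# becomes one self-join per triple pattern on the triples table.
rows = con.execute(
    """
    SELECT t1.s
    FROM triples t1 JOIN triples t2 ON t1.s = t2.s
    WHERE t1.p = 'teaches' AND t1.o = 'Databases'
      AND t2.p = 'works_for' AND t2.o = 'MPI Informatics'
    ORDER BY t1.s
    """
).fetchall()
print(rows)  # [('Katja',), ('Martin',)]
```

Because each additional triple pattern adds another self-join, index structures over permutations of (s, p, o) matter a great deal for the query performance of such systems.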
{
"docid": "067db0925021adcfcacdff175c7fc639",
"text": "Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs.",
"title": ""
},
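The abstract above reports that multirotor energy consumption varies approximately linearly with payload and battery weight, which underpins its VRP formulations. A minimal sketch of such a linear energy model and a route-feasibility check is given below; the function names and coefficient values are hypothetical placeholders, not figures from the paper:

```python
# Illustrative sketch only: a linear energy model of the form
# E = (alpha * (m_batt + m_payload) + beta) * d, with hypothetical
# coefficients alpha and beta that would be fitted to flight data.

def trip_energy_kj(distance_km: float, payload_kg: float, battery_kg: float,
                   alpha: float = 130.0, beta: float = 210.0) -> float:
    """Energy (kJ) for one leg, linear in carried mass as the abstract describes."""
    return (alpha * (payload_kg + battery_kg) + beta) * distance_km

def route_feasible(legs, battery_kg, capacity_kj):
    """Check that a multi-leg route stays within the battery's energy budget.
    `legs` is a list of (distance_km, payload_kg) tuples."""
    used = sum(trip_energy_kj(d, p, battery_kg) for d, p in legs)
    return used <= capacity_kj

# Example: two deliveries followed by an empty return leg.
print(route_feasible([(2.0, 1.5), (1.0, 0.5), (2.5, 0.0)],
                     battery_kg=1.2, capacity_kj=2000.0))
```

In a full VRP, this kind of feasibility check would appear as a constraint in the mixed integer program or as part of the simulated annealing cost function described in the abstract.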
{
"docid": "a1fb87b94d93da7aec13044d95ee1e44",
"text": "Many natural language processing tasks solely rely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies by soft probabilities between every two tokens, but they are not effective and efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, “reinforced self-attention (ReSA)”, for the mutual benefit of each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention called “reinforced sequence sampling (RSS)”, selecting tokens in parallel and trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, “reinforced self-attention network (ReSAN)”, solely based on ReSA. It achieves state-of-the-art performance on both Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.",
"title": ""
},
{
"docid": "4560fd4f946a5b31693591977ca11207",
"text": "Editors’ abstract. Middle East Arab terrorists are on the cutting edge of organizational networking and stand to gain significantly from the information revolution. They can harness information technology to enable less hierarchical, more networked designs—enhancing their flexibility, responsiveness, and resilience. In turn, information technology can enhance their offensive operational capabilities for the war of ideas as well as for the war of violent acts. Zanini and Edwards (both at RAND) focus their analysis primarily on Middle East terrorism but also discuss other groups around the world. They conclude with a series of recommendations for policymakers. This chapter draws on RAND research originally reported in Ian Lesser et al., Countering the New Terrorism (1999).",
"title": ""
},
{
"docid": "152e5d8979eb1187e98ecc0424bb1fde",
"text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.",
"title": ""
},
{
"docid": "af306589f2a68bdaa8187ea04b5e2962",
"text": "This study concerns the effect that music has on consumer behavior in two different retail contexts during regular opening hours. Two studies were conducted in a field setting with consumers (N1⁄4550). Consumers were recruited to answer questions regarding behavioral measures, attitudes, and mood during days when background music was played. The conclusions from the two studies are that music affects consumer behavior, but also that the type of retail store and gender influences both the strength and direction of the effect. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a979ef975801baf7c5eaf440fb012fcf",
"text": "Shape representation is a fundamental problem in computer vision. Current approaches to shape representation mainly focus on designing low-level shape descriptors which are robust to rotation, scaling and deformation of shapes. In this paper, we focus on mid-level modeling of shape representation. We develop a new shape representation called Bag of Contour Fragments (BCF) inspired by classical Bag of Words (BoW) model. In BCF, a shape is decomposed into contour fragments each of which is then individually described using a shape descriptor, e.g., the Shape Context descriptor, and encoded into a shape code. Finally, a compact shape representation is built by pooling shape codes in the shape. Shape classification with BCF only requires an efficient linear SVM classifier. In our experiments, we fully study the characteristics of BCF, show that BCF achieves the state-of-the-art performance on several well-known shape benchmarks, and can be applied to real image classification problem.",
"title": ""
},
{
"docid": "e88c0e0fb76520ec323b90d8bd7ba64d",
"text": "The intestinal epithelium is the most rapidly self-renewing tissue in adult mammals. We have recently demonstrated the presence of about six cycling Lgr5+ stem cells at the bottoms of small-intestinal crypts. Here we describe the establishment of long-term culture conditions under which single crypts undergo multiple crypt fission events, while simultanously generating villus-like epithelial domains in which all differentiated cell types are present. Single sorted Lgr5+ stem cells can also initiate these cryptvillus organoids. Tracing experiments indicate that the Lgr5+ stem-cell hierarchy is maintained in organoids. We conclude that intestinal cryptvillus units are self-organizing structures, which can be built from a single stem cell in the absence of a non-epithelial cellular niche.",
"title": ""
},
{
"docid": "65017c1a19a0e0b131d894622cb5f14c",
"text": "One of the most important steps in building a recommender system is the interaction design process, which defines how the recommender system interacts with a user. It also shapes the experience the user gets, from the point she registers and provides her preferences to the system, to the point she receives recommendations generated by the system. A proper interaction design may improve user experience and hence may result in higher usability of the system, as well as, in higher satisfaction. In this paper, we focus on the interaction design of a mobile food recommender system that, through a novel interaction process, elicits users’ long-term and short-term preferences for recipes. User’s long-term preferences are captured by asking the user to rate and tag familiar recipes, while for collecting the short-term preferences, the user is asked to select the ingredients she would like to include in the recipe to be prepared. Based on the combined exploitation of both types of preferences, a set of personalized recommendations is generated. We conducted a user study measuring the usability of the proposed interaction. The results of the study show that the majority of users rates the quality of the recommendations high and the system achieves usability scores above the standard benchmark.",
"title": ""
},
{
"docid": "3c44ce1ec0a6253aae520fc928050bb3",
"text": "The cognitive walkthrough method described by Wharton et al. may be difficult to apply in a large software development company because of social constraints that exist in such companies. Managers, developers, and other team members are pressured for time, tend to lapse into lengthy design discussions, and are sometimes defensive about their user-interface designs. By enforcing four ground rules, explicitly defusing defensiveness, and streamlining the cognitive walkthrough method and data collection procedures, these social constraints can be overcome, and useful, valid data can be obtained. This paper describes a modified cognitive walkthrough process that accomplishes these goals, and has been applied in a large software development company.",
"title": ""
},
{
"docid": "5394df4e1d6f52a608bfdab8731da088",
"text": "For over a decade, researchers have devoted much effort to construct theoretical models, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM) for explaining and predicting user behavior in IS acceptance and continuance. Another model, the Cognitive Model (COG), was proposed for continuance behavior; it combines some of the variables used in both TAM and ECM. This study applied the technique of structured equation modeling with multiple group analysis to compare the TAM, ECM, and COG models. Results indicate that TAM, ECM, and COG have quite different assumptions about the underlying constructs that dictate user behavior and thus have different explanatory powers. The six constructs in the three models were synthesized to propose a new Technology Continuance Theory (TCT). A major contribution of TCT is that it combines two central constructs: attitude and satisfaction into one continuance model, and has applicability for users at different stages of the adoption life cycle, i.e., initial, short-term and long-term users. The TCT represents a substantial improvement over the TAM, ECM and COG models in terms of both breadth of applicability and explanatory power.",
"title": ""
},
{
"docid": "dbdbff5b0d3738306099394d952bed83",
"text": "High-flow nasal cannula (HFNC) therapy is increasingly proposed as first-line respiratory support for infants with acute viral bronchiolitis (AVB). Most teams use 2 L/kg/min, but no study compared different flow rates in this setting. We hypothesized that 3 L/kg/min would be more efficient for the initial management of these patients. A randomized controlled trial was performed in 16 pediatric intensive care units (PICUs) to compare these two flow rates in infants up to 6 months old with moderate to severe AVB and treated with HFNC. The primary endpoint was the percentage of failure within 48 h of randomization, using prespecified criteria of worsening respiratory distress and discomfort. From November 2016 to March 2017, 142 infants were allocated to the 2-L/kg/min (2L) flow rate and 144 to the 3-L/kg/min (3L) flow rate. Failure rate was comparable between groups: 38.7% (2L) vs. 38.9% (3L; p = 0.98). Worsening respiratory distress was the most common cause of failure in both groups: 49% (2L) vs. 39% (3L; p = 0.45). In the 3L group, discomfort was more frequent (43% vs. 16%, p = 0.002) and PICU stays were longer (6.4 vs. 5.3 days, p = 0.048). The intubation rates [2.8% (2L) vs. 6.9% (3L), p = 0.17] and durations of invasive [0.2 (2L) vs. 0.5 (3L) days, p = 0.10] and noninvasive [1.4 (2L) vs. 1.6 (3L) days, p = 0.97] ventilation were comparable. No patient had air leak syndrome or died. In young infants with AVB supported with HFNC, 3 L/kg/min did not reduce the risk of failure compared with 2 L/kg/min. This clinical trial was recorded on the National Library of Medicine registry (NCT02824744).",
"title": ""
},
{
"docid": "8cdbbbfa00dfd08119e1802e9498df20",
"text": "Background:Cetuximab is the only targeted agent approved for the treatment of head and neck squamous cell carcinomas (HNSCC), but low response rates and disease progression are frequently reported. As the phosphoinositide 3-kinase (PI3K) and the mammalian target of rapamycin (mTOR) pathways have an important role in the pathogenesis of HNSCC, we investigated their involvement in cetuximab resistance.Methods:Different human squamous cancer cell lines sensitive or resistant to cetuximab were tested for the dual PI3K/mTOR inhibitor PF-05212384 (PKI-587), alone and in combination, both in vitro and in vivo.Results:Treatment with PKI-587 enhances sensitivity to cetuximab in vitro, even in the condition of epidermal growth factor receptor (EGFR) resistance. The combination of the two drugs inhibits cells survival, impairs the activation of signalling pathways and induces apoptosis. Interestingly, although significant inhibition of proliferation is observed in all cell lines treated with PKI-587 in combination with cetuximab, activation of apoptosis is evident in sensitive but not in resistant cell lines, in which autophagy is pre-eminent. In nude mice xenografted with resistant Kyse30 cells, the combined treatment significantly reduces tumour growth and prolongs mice survival.Conclusions:Phosphoinositide 3-kinase/mammalian target of rapamycin inhibition has an important role in the rescue of cetuximab resistance. Different mechanisms of cell death are induced by combined treatment depending on basal anti-EGFR responsiveness.",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
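The passage above describes training utterance-level embeddings with a triplet loss based on cosine similarity. A minimal NumPy sketch of that loss is shown below; the margin value, array shapes, and function names are assumptions for illustration rather than the authors' implementation:

```python
import numpy as np

# Illustrative sketch only; names, margin value, and shapes are assumptions,
# not the paper's actual training code.

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cosine_triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge loss on cosine similarity: push sim(a, p) above sim(a, n) by `margin`.
    Inputs are (batch, dim) arrays of utterance-level speaker embeddings."""
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    sim_ap = np.sum(a * p, axis=1)   # cosine similarity, same speaker
    sim_an = np.sum(a * n, axis=1)   # cosine similarity, different speaker
    return np.mean(np.maximum(0.0, margin + sim_an - sim_ap))

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(4, 512)) for _ in range(3))
print(cosine_triplet_loss(a, p, n))
```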
{
"docid": "5d9221deea5438b812e624008dfbc2db",
"text": "Robust face detection is one of the most important preprocessing steps to support facial expression analysis, facial landmarking, face recognition, pose estimation, building of 3D facial models, etc. Although this topic has been intensely studied for decades, it is still challenging due to numerous variants of face images in real-world scenarios. In this paper, we present a novel approach named Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to robustly detect human facial regions from images collected under various challenging conditions, e.g. large occlusions, extremely low resolutions, facial expressions, strong illumination variations, etc. The proposed approach is benchmarked on two challenging face detection databases, i.e. the Wider Face database and the Face Detection Dataset and Benchmark (FDDB), and compared against recent other face detection methods, e.g. Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Chanel Features, HeadHunter, Multi-view Face Detection, Cascade CNN, etc. The experimental results show that our proposed approach consistently achieves highly competitive results with the state-of-the-art performance against other recent face detection methods.",
"title": ""
},
{
"docid": "854c0cc4f9beb2bf03ac58be8bf79e8c",
"text": "Mobile robots have the potential to become the ideal tool to teach a broad range of engineering disciplines. Indeed, mobile robots are getting increasingly complex and accessible. They embed elements from diverse fields such as mechanics, digital electronics, automatic control, signal processing, embedded programming, and energy management. Moreover, they are attractive for students which increases their motivation to learn. However, the requirements of an effective education tool bring new constraints to robotics. This article presents the e-puck robot design, which specifically targets engineering education at university level. Thanks to its particular design, the e-puck can be used in a large spectrum of teaching activities, not strictly related to robotics. Through a systematic evaluation by the students, we show that the epuck fits this purpose and is appreciated by 90 percent of a large sample of students.",
"title": ""
},
{
"docid": "e17fd5cd7f36702af2bb5f8ac0415eaf",
"text": "This paper surveys fitness functions used in the field of evolutionary robotics (ER). Evolutionary robotics is a field of research that applies artificial evolution to generate control systems for autonomous robots. During evolution, robots attempt to perform a given task in a given environment. The controllers in the better performing robots are selected, altered and propagated to perform the task again in an iterative process that mimics some aspects of natural evolution. A key component of this process – one might argue, the key component – is the measurement of fitness in the evolving controllers. ER is one of a host of machine learning methods that rely on interaction with, and feedback from, a complex dynamic environment to drive synthesis of controllers for autonomous agents. These methods have the potential to lead to the development of robots that can adapt to uncharacterized environments and which may be able to perform tasks that human designers do not completely understand. In order to achieve this, issues regarding fitness evaluation must be addressed. In this paper we survey current ER research and focus on work that involved real robots. The surveyed research is organized according to the degree of a priori knowledge used to formulate the various fitness functions employed during evolution. The underlying motivation for this is to identify methods that allow the development of the greatest degree of novel control, while requiring the minimum amount of a priori task knowledge from the designer. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "acf6a62e487b79fc0500aa5e6bbb0b0b",
"text": "This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.",
"title": ""
},
{
"docid": "34401a7e137cffe44f67e6267f29aa57",
"text": "Future Point-of-Care (PoC) molecular-level diagnosis requires advanced biosensing systems that can achieve high sensitivity and portability at low power consumption levels, all within a low price-tag for a variety of applications such as in-field medical diagnostics, epidemic disease control, biohazard detection, and forensic analysis. Magnetically labeled biosensors are proposed as a promising candidate to potentially eliminate or augment the optical instruments used by conventional fluorescence-based sensors. However, magnetic biosensors developed thus far require externally generated magnetic biasing fields [1–4] and/or exotic post-fabrication processes [1,2]. This limits the ultimate form-factor of the system, total power consumption, and cost. To address these impediments, we present a low-power scalable frequency-shift magnetic particle biosensor array in bulk CMOS, which provides single-bead detection sensitivity without any (electrical or permanent) external magnets.",
"title": ""
},
{
"docid": "f59ed11cd56f48e7ff25e5ad21d27ded",
"text": "Recent research has begun to focus on the factors that cause people to respond to phishing attacks as well as affect user behavior on social networks. This study examines the correlation between the Big Five personality traits and email phishing response. Another aspect examined is how these factors relate to users' tendency to share information and protect their privacy on Facebook (which is one of the most popular social networking sites).\n This research shows that when using a prize phishing email, neuroticism is the factor most correlated to responding to this email, in addition to a gender-based difference in the response. This study also found that people who score high on the openness factor tend to both post more information on Facebook as well as have less strict privacy settings, which may cause them to be susceptible to privacy attacks. In addition, this work detected no correlation between the participants estimate of being vulnerable to phishing attacks and actually being phished, which suggests susceptibility to phishing is not due to lack of awareness of the phishing risks and that real-time response to phishing is hard to predict in advance by online users.\n The goal of this study is to better understand the traits that contribute to online vulnerability, for the purpose of developing customized user interfaces and secure awareness education, designed to increase users' privacy and security in the future.",
"title": ""
}
] |
scidocsrr
|
c5c469d08bc56c00a7e64c1ba256920d
|
Morphable Counters: Enabling Compact Integrity Trees For Low-Overhead Secure Memories
|
[
{
"docid": "b27d9ddc450ed71497d70ebb7f31d7a8",
"text": "Cores in a chip-multiprocessor (CMP) system share multiple hardware resources in the memory subsystem. If resource sharing is unfair, some applications can be delayed significantly while others are unfairly prioritized. Previous research proposed separate fairness mechanisms in each individual resource. Such resource-based fairness mechanisms implemented independently in each resource can make contradictory decisions, leading to low fairness and loss of performance. Therefore, a coordinated mechanism that provides fairness in the entire shared memory system is desirable.\n This paper proposes a new approach that provides fairness in the entire shared memory system, thereby eliminating the need for and complexity of developing fairness mechanisms for each individual resource. Our technique, Fairness via Source Throttling (FST), estimates the unfairness in the entire shared memory system. If the estimated unfairness is above a threshold set by system software, FST throttles down cores causing unfairness by limiting the number of requests they can inject into the system and the frequency at which they do. As such, our source-based fairness control ensures fairness decisions are made in tandem in the entire memory system. FST also enforces thread priorities/weights, and enables system software to enforce different fairness objectives and fairness-performance tradeoffs in the memory system.\n Our evaluations show that FST provides the best system fairness and performance compared to four systems with no fairness control and with state-of-the-art fairness mechanisms implemented in both shared caches and memory controllers.",
"title": ""
}
] |
[
{
"docid": "1714b97ec601792446cb7ad34a70e3b6",
"text": "Interaction intent prediction and the Midas touch have been a longstanding challenge for eye-tracking researchers and users of gaze-based interaction. Inspired by machine learning approaches in biometric person authentication, we developed and tested an offline framework for task-independent prediction of interaction intents. We describe the principles of the method, the features extracted, normalization methods, and evaluation metrics. We systematically evaluated the proposed approach on an example dataset of gaze-augmented problem-solving sessions. We present results of three normalization methods, different feature sets and fusion of multiple feature types. Our results show that accuracy of up to 76% can be achieved with Area Under Curve around 80%. We discuss the possibility of applying the results for an online system capable of interaction intent prediction.",
"title": ""
},
{
"docid": "a13a302e7e2fd5e09a054f1bf23f1702",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
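Since the passage above proposes plain ridge regression as a linear baseline for color constancy, a small sketch of the closed-form solution may help make the comparison concrete. The feature construction, regularization value, and variable names below are illustrative assumptions, not the study's actual setup:

```python
import numpy as np

# Minimal sketch of ridge regression as a linear alternative to NN/SVR
# estimators; the features and lambda value are placeholders.

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution: w = (X^T X + lam * I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_predict(X, w):
    return X @ w

# Toy illuminant-estimation style usage: map image features to an RGB estimate.
rng = np.random.default_rng(0)
X = rng.random((200, 8))   # e.g. chromaticity histogram features per image
y = rng.random((200, 3))   # e.g. scene illuminant (R, G, B) per image
w = ridge_fit(X, y, lam=0.1)
print(ridge_predict(X[:2], w))
```

The single hyperparameter mentioned in the passage corresponds to the regularization strength lam, which would typically be chosen by cross-validation.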
{
"docid": "aedeb977109fd18ef3dd471b80e40fc1",
"text": "Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.",
"title": ""
},
{
"docid": "0b18f7966a57e266487023d3a2f3549d",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
},
{
"docid": "393ba48bf72e535bdd8a735583fae5ba",
"text": "The PCR is used widely for the study of rRNA genes amplified from mixed microbial populations. These studies resemble quantitative applications of PCR in that the templates are mixtures of homologs and the relative abundance of amplicons is thought to provide some measure of the gene ratios in the starting mixture. Although such studies have established the presence of novel rRNA genes in many natural ecosystems, inferences about gene abundance have been limited by uncertainties about the relative efficiency of gene amplification in the PCR. To address this question, three rRNA gene standards were prepared by PCR, mixed in known proportions, and amplified a second time by using primer pairs in which one primer was labeled with a fluorescent nucleotide derivative. The PCR products were digested with restriction endonucleases, and the frequencies of genes in the products were determined by electrophoresis on an Applied Biosystems 373A automated DNA sequencer in Genescan mode. Mixtures of two templates amplified with the 519F-1406R primer pair yielded products in the predicted proportions. A second primer pair (27F-338R) resulted in strong bias towards 1:1 mixtures of genes in final products, regardless of the initial proportions of the templates. This bias was strongly dependent on the number of cycles of replication. The results fit a kinetic model in which the reannealing of genes progressively inhibits the formation of template-primer hybrids.",
"title": ""
},
{
"docid": "8589f7b0b2d1cbea479e97b0aa6b1498",
"text": "Distributed publish/subscribe systems are naturally suited for processing events in distributed systems. However, support for expressing patterns about distributed events and algorithms for detecting correlations among these events are still largely unexplored. Inspired from the requirements of decentralized, event-driven workflow processing, we design a subscription language for expressing correlations among distributed events. We illustrate the potential of our approach with a workflow management case study. The language is validated and implemented in PADRES. In this paper we present an overview of PADRES, highlighting some of its novel features, including the composite subscription language, the coordination patterns, the composite event detection algorithms, the rule-based router design, and a detailed case study illustrating the decentralized processing of workflows. Our experimental evaluation shows that rule-based brokers are a viable and powerful alternative to existing, special-purpose, content-based routing algorithms. The experiments also show that the use of composite subscriptions in PADRES significantly reduces the load on the network. Complex workflows can be processed in a decentralized fashion with a gain of 40% in message dissemination cost. All processing is realized entirely in the publish/subscribe paradigm.",
"title": ""
},
{
"docid": "4de80563d7c651b02764499d4b7e679f",
"text": "Spam is any unwanted electronic message or material in any form posted too many people. As the world is growing as global world, social networking sites play an important role in making world global providing people from different parts of the world a platform to meet and express their views. Among different social networking sites Facebook become the leading one. With increase in usage different users start abusive use of Facebook by posting or creating ways to post spam. This paper highlights the potential spam types nowadays Facebook users’ faces. This paper also provide the reason how user become victim to spam attack. A methodology is proposed in the end discusses how to handle different types of spam. Keywords—Artificial neural networks, Facebook spam, social networking sites, spam filter.",
"title": ""
},
{
"docid": "300106746d148b5e030f964ac0ffb43c",
"text": "A new architecture has been designed and demonstrated for a low-power SP10T-RF-Switch-IC using 0.18μm SOI-CMOS, implementing an RF-Switch, negative voltage generator, and MIPI in a chip. Clock frequency of the negative voltage generator is controlled to increase only in a switch transition and drop at other times in order to reduce power consumption. Results of an evaluation of a trial chip confirmed a 33% reduction in power consumption compared with conventional architecture while RF performance is maintained.",
"title": ""
},
{
"docid": "74128a89e6dc36b264c36a27f5e63cb0",
"text": "Throughout the last 15 years, Synchronous Reluctance (SyR) motor drives have been presented as a competitive alternative to Induction Motor (IM) drives in many variable speed applications, at least in the small power range. Very few examples of SyR motors above the 100Nm torque size are present in the literature. The main advantage of the SyR motor lays in the absence of rotor Joule losses that permits to obtain a continuous torque that is higher than the one of an IM of the same size. The paper presents a 250kW, 1000rpm Synchronous Reluctance motor for industrial applications and its performance comparison with an Induction Motor with identical stator. The rotor of the SyR motor has been purposely designed to fit the stator and housing of the IM competitor. The experimental comparison of the two motors is presented, on a regenerative test bench where one the two motors under test can load the competitor and vice-versa. The results of the experimental tests confirm the assumption that the SyR motor gives more torque and then more power in continuous operation at rated windings temperature. However, the IM maintains a certain advantage in terms of flux-weakening capability that means a larger constant-power speed range.",
"title": ""
},
{
"docid": "9932e16d2202a024223173613e19314c",
"text": "Systems for On-Line Analytical Processing (OLAP) consider ably ease the process of analyzing business data and have become widely used in industry. OLAP syste ms primarily employ multidimensional data models to structure their data. However, current multi dimensional data models fall short in their ability to model the complex data found in some real-world ap plication domains. The paper presents nine requirements to multidimensional data models, each of which is exemplified by a real-world, clinical case study. A survey of the existing models reveals that the requirements not currently met include support for many-to-many relationships between facts and d imensions, built-in support for handling change and time, and support for uncertainty as well as diffe rent levels of granularity in the data. The paper defines an extended multidimensional data model, whic h addresses all nine requirements. Along with the model, we present an associated algebra, and outlin e how to implement the model using relational databases.",
"title": ""
},
{
"docid": "0802735955b52c1dae64cf34a97a33fb",
"text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.",
"title": ""
},
{
"docid": "728bc76467b7f4ddf7c8c368cdf2d44b",
"text": "SQL is the de facto language for manipulating relational data. Though powerful, many users find it difficult to write SQL queries due to highly expressive constructs. \n While using the programming-by-example paradigm to help users write SQL queries is an attractive proposition, as evidenced by online help forums such as Stack Overflow, developing techniques for synthesizing SQL queries from given input-output (I/O) examples has been difficult, due to the large space of SQL queries as a result of its rich set of operators. \n \n In this paper, we present a new scalable and efficient algorithm for synthesizing SQL queries based on I/O examples. The key innovation of our algorithm is development of a language for abstract queries, i.e., queries with uninstantiated operators, that can be used to express a large space of SQL queries efficiently. Using abstract queries to represent the search space nicely decomposes the synthesis problem into two tasks: 1) searching for abstract queries that can potentially satisfy the given I/O examples, and 2) instantiating the found abstract queries and ranking the results. \n \n We have implemented this algorithm in a new tool called Scythe and evaluated it using 193 benchmarks collected from Stack Overflow. Our evaluation shows that Scythe can efficiently solve 74% of the benchmarks, most in just a few seconds, and the queries range from simple ones involving a single selection to complex queries with 6 nested subqueires.",
"title": ""
},
{
"docid": "dd270ffa800d633a7a354180eb3d426c",
"text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.",
"title": ""
},
{
"docid": "2982c7f57f3efa82a07ec3e6f8e34f03",
"text": "The aim of this study is to assess whether the effect of gender on the excessive daytime sleepiness (EDS) is influenced by two confounders (age and hours of sleep per night). A cross-sectional study was conducted at King Abdulaziz Medical City-Riyadh (KAMC-R). A total of 2095 respondents answered a questionnaire that included questions regarding gender, age, hours of sleep per night, and daytime sleepiness using the Epworth Sleepiness Scale (ESS). The prevalence of EDS was 20.5% (females 22.2%, males 19.5%, p-value=0.136). The EDS did not differ between genders, age groups, or hours of sleep per night (<6 vs. ⩾6h). However, stratified statistical analysis shows that the prevalence of EDS did differ according to gender (25.3% in females, 19.0% in males, p-value=0.036) in respondents with shorter hours of sleep per night. EDS was strongly related to female gender and young age (ages⩽29years) in respondents with short hours of sleep. This study reveals that one out of five of the general Saudi population has EDS. The effect of gender on EDS appeared to be influenced by hours of sleep per night. High EDS strongly related to female gender with short hours of sleep.",
"title": ""
},
{
"docid": "fbcdb57ae0d42e9665bc95dbbca0d57b",
"text": "Data classification and tag recommendation are both important and challenging tasks in social media. These two tasks are often considered independently and most efforts have been made to tackle them separately. However, labels in data classification and tags in tag recommendation are inherently related. For example, a Youtube video annotated with NCAA, stadium, pac12 is likely to be labeled as football, while a video/image with the class label of coast is likely to be tagged with beach, sea, water and sand. The existence of relations between labels and tags motivates us to jointly perform classification and tag recommendation for social media data in this paper. In particular, we provide a principled way to capture the relations between labels and tags, and propose a novel framework CLARE, which fuses data CLAssification and tag REcommendation into a coherent model. With experiments on three social media datasets, we demonstrate that the proposed framework CLARE achieves superior performance on both tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "db8cbcc8a7d233d404a18a54cb9fedae",
"text": "Edge preserving filters preserve the edges and its information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence these family of filters are very useful in reducing the noise in an image making it very demanding in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. This paper provides a concrete introduction to edge preserving filters starting from the heat diffusion equation in olden to recent eras, an overview of its numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in presence of noise. Furthermore it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.",
"title": ""
},
{
"docid": "1472e8a0908467404c01d236d2f39c58",
"text": "Millimetre wave antennas are typically used for applications like anti-collision car radar or sensory. A new and upcoming application is the use of 60 GHz antennas for high date rate point-to-point connections to serve wireless local area networks. For high gain antennas, configurations using lenses in combination with planar structures are often applied. However, single layer planar arrays might offer a more cost-efficient solution, especially if the antenna and the RF-circuitry are realised on one and the same substrate. The design of millimetre wave antennas has to cope with the severe impacts of manufacturing tolerances and losses at these frequencies. Reproducibility can become poor in such cases. The successful design and realisation of a cost-efficient 60 GHz planar patch array (8/spl times/8 elements) with high reproducibility for point-to-point connections is presented. Important design aspects are highlighted and manufacturing tolerances and losses are analysed. Measurement results of different prototypes are presented to show the reproducibility of the antenna layout.",
"title": ""
},
{
"docid": "e507cda442e28c4990ae00aec4a38525",
"text": "BACKGROUND\nFusobacterium necrophorum is a common agent of disease in humans, but the occurrence of primary infections outside the head and neck area is extremely rare. While infection with Fusobacterium necrophorum has a rather benign course above the thorax, the organism is capable of producing very severe disease when located in unusual sites, including various forms of septic thrombophlebitis. No infections of the leg have been documented before; thus, antibiotic coverage for Fusobacterium is currently not recommended in this area.\n\n\nCASE PRESENTATION\nA 50-year-old homeless African-American man presented complaining of severe pain in his right lower extremity. A clinical workup was consistent with emphysematous pyomyositis and compartment syndrome; he received limb-saving surgical intervention. The offending organism was identified as Fusobacterium necrophorum, and the antibiotic coverage was adjusted accordingly.\n\n\nCONCLUSIONS\nBacteria typically involved in necrotizing infections of the lower extremity include Group A ß-hemolytic Streptococcus, Clostridium perfringens, and common anaerobic bacteria (Bacteroides, Peptococcus, and Peptostreptococcus). This case report presents a case of gas gangrene of the leg caused by Fusobacterium necrophorum, the first such case reported. Fusobacterium should now be included in the differential diagnosis of necrotizing fasciitis of the extremities.",
"title": ""
},
{
"docid": "44ef466e59603fc90a30217e7fab00cf",
"text": "We address the problem of content-aware, foresighted resource reciprocation for media streaming over peer-to-peer (P2P) networks. The envisioned P2P network consists of autonomous and self-interested peers trying to maximize their individual utilities. The resource reciprocation among such peers is modeled as a stochastic game and peers determine the optimal strategies for resource reciprocation using a Markov Decision Process (MDP) framework. Unlike existing solutions, this framework takes the content and the characteristics of the video signal into account by introducing an artificial currency in order to maximize the video quality in the entire network.",
"title": ""
},
{
"docid": "38559091a247af5d97c971635dd87643",
"text": "In this article, we review probabilistic topic models: graphical models that can be used to summarize a large collection of documents with a smaller number of distributions over words. Those distributions are called \"topics\" because, when fit to data, they capture the salient themes that run through the collection. We describe both finite-dimensional parametric topic models and their Bayesian nonparametric counterparts, which are based on the hierarchical Dirichlet process (HDP). We discuss two extensions of topic models to time-series data-one that lets the topics slowly change over time and one that lets the assumed prevalence of the topics change. Finally, we illustrate the application of topic models to nontext data, summarizing some recent research results in image analysis.",
"title": ""
}
] |
scidocsrr
|
e8db41fb0e4560260a01382e0a19361f
|
Learning to Learn from Weak Supervision by Full Supervision
|
[
{
"docid": "e90b54f7ae5ebc0b46d0fb738bb0f458",
"text": "The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.",
"title": ""
}
] |
[
{
"docid": "c8fd391e486efcf907424119696cdf01",
"text": "AIM\nThis paper is the report of a study to explicate the components of observable behaviour that indicate a potential for violence in patients, their family and friends when presenting at an emergency department.\n\n\nBACKGROUND\nViolence towards nurses is a contemporary, multifaceted problem for the healthcare workforce globally. International literature identifies emergency departments as having high levels of violence.\n\n\nMETHOD\nA mixed method case study design was adopted, and data were collected by means of 290 hours of participant observation, 16 semi-structured interviews and 13 informal field interviews over a 5-month period in 2005. Thematic analysis of textual data was undertaken using NVivo2. Frequency counts were developed from the numerical data.\n\n\nFINDINGS\nFive distinctive elements of observable behaviour indicating potential for violence in patients, their families and friends were identified. These elements can be conceptualized as a potential nursing violence assessment framework and described through the acronym STAMP: Staring and eye contact, Tone and volume of voice, Anxiety, Mumbling and Pacing.\n\n\nCONCLUSION\nStaring and eye contact, Tone and volume of voice, Anxiety, Mumbling and Pacing provides a useful, practical nursing violence assessment framework to assist nurses to quickly identify patients, families and friends who have a potential for violence.",
"title": ""
},
{
"docid": "f773798785419625b8f283fc052d4ab2",
"text": "The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.",
"title": ""
},
{
"docid": "1c252b9df2e43f42f9dc6101d823e644",
"text": "The paper provides an overview and comparison of Greenhouse Gas Emissions associated with fossil, nuclear and renewable energy systems. In this context both the direct technology-specific emissions and the contributions from full energy chains within the Life Cycle Assessment framework are considered. Examples illustrating the differences between countries and regional electricity mixes are also provided. Core results presented here are based on the work performed at PSI, and by partners within the Swiss Centre for Life-Cycle Inventories.",
"title": ""
},
{
"docid": "d73b277bf829a3295dfa86b33ad19c4a",
"text": "Biodiesel is a renewable and environmentally friendly liquid fuel. However, the feedstock, predominantly crop oil, is a limited and expensive food resource which prevents large scale application of biodiesel. Development of non-food feedstocks are therefore, needed to fully utilize biodiesel’s potential. In this study, the larvae of a high fat containing insect, black soldier fly (Hermetia illucens) (BSFL), was evaluated for biodiesel production. Specifically, the BSFL was grown on organic wastes for 10 days and used for crude fat extraction by petroleum ether. The extracted crude fat was then converted into biodiesel by acid-catalyzed (1% H2SO4) esterification and alkaline-catalyzed (0.8% NaOH) transesterification, resulting in 35.5 g, 57.8 g and 91.4 g of biodiesel being produced from 1000 BSFL growing on 1 kg of cattle manure, pig manure and chicken manure, respectively. The major ester components of the resulting biodiesel were lauric acid methyl ester (35.5%), oleinic acid methyl ester (23.6%) and palmitic acid methyl ester (14.8%). Fuel properties of the BSFL fat-based biodiesel, such as density (885 kg/m), viscosity (5.8 mm/s), ester content (97.2%), flash point (123 C), and cetane number (53) were comparable to those of rapeseed-oil-based biodiesel. These results demonstrated that the organic waste-grown BSFL could be a feasible non-food feedstock for biodiesel production. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0537a00983f91942099d93a5a2c22195",
"text": "Conflicting evidence exists regarding the optimal treatment for abscess complicating acute appendicitis. The objective of this study is to compare immediate appendectomy (IMM APP) versus expectant management (EXP MAN) including percutaneous drainage with or without interval appendectomy to treat periappendiceal abscess. One hundred four patients with acute appendicitis complicated by periappendiceal abscess were identified. We compared 36 patients who underwent IMM APP with 68 patients who underwent EXP MAN. Outcome measures included morbidity and length of hospital stay. The groups were similar with regard to age (30.6 +/- 12.3 vs. 34.8 +/- 13.5 years), gender (61% vs. 62% males), admission WBC count (17.5 +/- 5.1 x 10(3) vs. 17.0 +/- 4.8 x 10(3) cells/dL), and admission temperature (37.9 +/- 1.2 vs. 37.8 +/- 0.9 degrees F). IMM APP patients had a higher rate of complications than EXP MAN patients at initial hospitalization (58% vs. 15%, P < 0.001) and for all hospitalizations (67% vs. 24%, P < 0.001). The IMM APP group also had a longer initial (14.8 +/- 16.1 vs. 9.0 +/- 4.8 days, P = 0.01) and overall hospital stay (15.3 +/- 16.2 vs. 10.7 +/- 5.4 days, P = 0.04). We conclude that percutaneous drainage and interval appendectomy is preferable to immediate appendectomy for treatment of appendiceal abscess because it leads to a lower complication rate and a shorter hospital stay.",
"title": ""
},
{
"docid": "0c61b8228c28c992746cc7b5cf3006c7",
"text": "Cytokinin phytohormones regulate a variety of developmental processes in the root such as meristem size, vascular pattern, and root architecture [1-3]. Long-distance transport of cytokinin is supported by the discovery of cytokinins in xylem and phloem sap [4] and by grafting experiments between wild-type and cytokinin biosynthesis mutants [5]. Acropetal transport of cytokinin (toward the shoot apex) has also been implicated in the control of shoot branching [6]. However, neither the mode of transport nor a developmental role has been shown for basipetal transport of cytokinin (toward the root apex). In this paper, we combine the use of a new technology that blocks symplastic connections in the phloem with a novel approach to visualize radiolabeled hormones in planta to examine the basipetal transport of cytokinin. We show that this occurs through symplastic connections in the phloem. The reduction of cytokinin levels in the phloem leads to a destabilization of the root vascular pattern in a manner similar to mutants affected in auxin transport or cytokinin signaling [7]. Together, our results demonstrate a role for long-distance basipetal transport of cytokinin in controlling polar auxin transport and maintaining the vascular pattern in the root meristem.",
"title": ""
},
{
"docid": "680f3f97de5b5c42d8b3ee1acf6d8452",
"text": "One literature treats the hippocampus as a purely cognitive structure involved in memory; another treats it as a regulator of emotion whose dysfunction leads to psychopathology. We review behavioral, anatomical, and gene expression studies that together support a functional segmentation into three hippocampal compartments: dorsal, intermediate, and ventral. The dorsal hippocampus, which corresponds to the posterior hippocampus in primates, performs primarily cognitive functions. The ventral (anterior in primates) relates to stress, emotion, and affect. Strikingly, gene expression in the dorsal hippocampus correlates with cortical regions involved in information processing, while genes expressed in the ventral hippocampus correlate with regions involved in emotion and stress (amygdala and hypothalamus).",
"title": ""
},
{
"docid": "3fc74e621d0e485e1e706367d30e0bad",
"text": "Many commercial navigation aids suffer from a number of design flaws, the most important of which are related to the human interface that conveys information to the user. Aids for the visually impaired are lightweight electronic devices that are either incorporated into a long cane, hand-held or worn by the client, warning of hazards ahead. Most aids use vibrating buttons or sound alerts to warn of upcoming obstacles, a method which is only capable of conveying very crude information regarding direction and proximity to the nearest object. Some of the more sophisticated devices use a complex audio interface in order to deliver more detailed information, but this often compromises the user's hearing, a critical impairment for a blind user. The author has produced an original design and working prototype solution which is a major first step in addressing some of these faults found in current production models for the blind.",
"title": ""
},
{
"docid": "a1236db07cef7ba2346e51f61839722f",
"text": "We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.",
"title": ""
},
{
"docid": "2e7b628b1a3bcc51455e5822ca82bd56",
"text": "The installed capacity of distributed generation (DG) based on renewable energy sources has increased continuously in power systems, and its market-oriented transaction is imperative. However, traditional transaction management based on centralized organizations has many disadvantages, such as high operation cost, low transparency, and potential risk of transaction data modification. Therefore, a decentralized electricity transaction mode for microgrids is proposed in this study based on blockchain and continuous double auction (CDA) mechanism. A buyer and seller initially complete the transaction matching in the CDA market. In view of the frequent price fluctuation in the CDA market, an adaptive aggressiveness strategy is used to adjust the quotation timely according to market changes. DG and consumer exchange digital certificate of power and expenditure on the blockchain system and the interests of consumers are then guaranteed by multi-signature when DG cannot generate power due to failure or other reasons. The digital certification of electricity assets is replaced by the sequence number with specific tags in the transaction script, and the size of digital certification can be adjusted according to transaction energy quantity. Finally, the feasibility of market mechanism through specific microgrid case and settlement process is also provided.",
"title": ""
},
{
"docid": "a6e46ae5e89f90a77700ea3967fe0608",
"text": "Wireless charging is a technique of transmitting power through an air gap to an electrical device for the purpose of energy replenishment. Recently, wireless charging technology has significantly advanced in terms of efficiency and functionality. This article first presents an overview and fundamentals of wireless charging. We then provide the review of standards, that is, Qi and the Alliance for Wireless Power, and highlight their communication protocols. Next, we propose a novel concept of wireless charger networking that allows chargers to be connected to facilitate information collection and control. We demonstrate the application of the wireless charger network in user-charger assignment, which clearly shows the benefit in terms of reduced costs for users to identify the best chargers to replenish energy for their mobile devices.",
"title": ""
},
{
"docid": "5f31121bf6b8412a84f8aa46763c4d40",
"text": "A novel Koch-like fractal curve is proposed to transform ultra-wideband (UWB) bow-tie into so called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut off from center of each side of the initial isosceles triangle, then the procedure iterates along the sides like Koch curve does, forming the Koch-like fractal bow-tie geometry. The fractal bow-tie of each iterative is investigated without feedline in free space for fractal trait unveiling first, followed by detailed expansion upon the four-iterated pragmatic fractal bow-tie dipole fed by 50-Ω coaxial SMA connector through coplanar stripline (CPS) and comparison with Sierpinski gasket. The fractal bow-tie dipole can operate in multiband with moderate gain (3.5-7 dBi) and high efficiency (60%-80%), which is corresponding to certain shape parameters, such as notch ratio α, notch angle φ, and base angles θ of the isosceles triangle. Compared with conventional bow-tie dipole and Sierpinski gasket with the same size, this fractal-like antenna has almost the same operating properties in low frequency and better radiation pattern in high frequency in multi-band operation, which makes it a better candidate for applications of PCS, WLAN, WiFi, WiMAX, and other communication systems.",
"title": ""
},
{
"docid": "e994243e3124e4c0849eeb2b733c2a78",
"text": "This article explores the ways social interaction plays an integral role in the game EverQuest. Through our research we argue that social networks form a powerful component of the gameplay and the gaming experience, one that must be seriously considered to understand the nature of massively multiplayer online games. We discuss the discrepancy between how the game is portrayed and how it is actually played. By examining the role of social networks and interactions we seek to explore how the friendships between the players could be considered the ultimate exploit of the game.",
"title": ""
},
{
"docid": "e95336e305ac921c01198554da91dcdb",
"text": "We consider the problem of staffing call-centers with multip le customer classes and agent types operating under quality-of-service (QoS) constraints and demand rate uncertainty. We introduce a formulation of the staffing problem that requires that the Q oS constraints are met with high probability with respect to the uncertainty in the demand ra te. We contrast this chance-constrained formulation with the average-performance constraints tha t have been used so far in the literature. We then propose a two-step solution for the staffing problem u nder chance constraints. In the first step, we introduce a Random Static Planning Problem (RSPP) a nd discuss how it can be solved using two different methods. The RSPP provides us with a first -order (or fluid) approximation for the true optimal staffing levels and a staffing frontier. In the second step, we solve a finite number of staffing problems with known arrival rates–the arrival rate s on the optimal staffing frontier. Hence, our formulation and solution approach has the important pro perty that it translates the problem with uncertain demand rates to one with known arrival rates. The o utput of our procedure is a solution that is feasible with respect to the chance constraint and ne arly optimal for large call centers.",
"title": ""
},
{
"docid": "2088fcfb9651e2dfcbaa123b723ef8aa",
"text": "Head pose estimation is not only a crucial preprocessing task in applications such as facial expression and face recognition, but also the core task for many others, e.g. gaze; driver focus of attention; head gesture recognitions. In real scenarios, the fine location and scale of a processed face patch should be consistently and automatically obtained. To this end, we propose a depth-based face spotting technique in which the face is cropped with respect to its depth data, and is modeled by its appearance features. By employing this technique, the localization rate was gained. additionally, by building a head pose estimator on top of it, we achieved more accurate pose estimates and better generalization capability. To estimate the head pose, we exploit Support Vector (SV) regressors to map Histogram of oriented Gradient (HoG) features extracted from the spotted face patches in both depth and RGB images to the head rotation angles. The developed pose estimator compared favorably to state-of-the-art approaches on two challenging DRGB databases.",
"title": ""
},
{
"docid": "362c41e8f90c097160c7785e8b4c9053",
"text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the bio-mimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize on the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market? Author",
"title": ""
},
{
"docid": "5cf2c4239507b7d66cec3cf8fabf7f60",
"text": "Government corruption is more prevalent in poor countries than in rich countries. This paper uses cross-industry heterogeneity in growth rates within Vietnam to test empirically whether growth leads to lower corruption. We find that it does. We begin by developing a model of government officials’ choice of how much bribe money to extract from firms that is based on the notion of inter-regional tax competition, and consider how officials’ choices change as the economy grows. We show that economic growth is predicted to decrease the rate of bribe extraction under plausible assumptions, with the benefit to officials of demanding a given share of revenue as bribes outweighed by the increased risk that firms will move elsewhere. This effect is dampened if firms are less mobile. Our empirical analysis uses survey data collected from over 13,000 Vietnamese firms between 2006 and 2010 and an instrumental variables strategy based on industry growth in other provinces. We find, first, that firm growth indeed causes a decrease in bribe extraction. Second, this pattern is particularly true for firms with strong land rights and those with operations in multiple provinces, consistent with these firms being more mobile. Our results suggest that as poor countries grow, corruption could subside “on its own,” and they demonstrate one type of positive feedback between economic growth and good institutions. ∗Contact information: Bai: jieb@mit.edu; Jayachandran: seema@northwestern.edu; Malesky: ejm5@duke.edu; Olken: bolken@mit.edu. We thank Lori Beaman, Raymond Fisman, Chang-Tai Hsieh, Supreet Kaur, Neil McCulloch, Andrei Shleifer, Matthew Stephenson, Eric Verhoogen, and Ekaterina Zhuravskaya for helpful comments.",
"title": ""
},
{
"docid": "45eea1373ec204261d98d99e33214225",
"text": "Current wireless network design is built on the ethos of avoiding interference. In this paper we question this long-held design principle. We show that with appropriate design, successful concurrent transmissions can be enabled and exploited on both the uplink and downlink. We show that this counter-intuitive approach of encouraging interference can be exploited to increase network capacity significantly and simplify network design. We design and implement name, a novel MAC and PHY protocol that exploits recently proposed rateless coding techniques to provide such concurrency. We show via a prototype implementation and experimental evaluation that name can provide a 60% increase in network capacity on the uplink compared to traditional Wifi that does omniscient rate adaptation and a $35\\%$ median throughput gain on the downlink PHY layer as compared to an omniscient scheme that picks the best conventional bitrate.",
"title": ""
},
{
"docid": "cfdc217170410e60fb9323cc39d51aff",
"text": "Malware, i.e., malicious software, represents one of the main cyber security threats today. Over the last decade malware has been evolving in terms of the complexity of malicious software and the diversity of attack vectors. As a result modern malware is characterized by sophisticated obfuscation techniques, which hinder the classical static analysis approach. Furthermore, the increased amount of malware that emerges every day, renders a manual approach inefficient. This study tackles the problem of analyzing, detecting and classifying the vast amount of malware in a scalable, efficient and accurate manner. We propose a novel approach for detecting malware and classifying it to either known or novel, i.e., previously unseen malware family. The approach relies on Random Forests classifier for performing both malware detection and family classification. Furthermore, the proposed approach employs novel feature representations for malware classification, that significantly reduces the feature space, while achieving encouraging predictive performance. The approach was evaluated using behavioral traces of over 270,000 malware samples and 837 samples of benign software. The behavioral traces were obtained using a modified version of Cuckoo sandbox, that was able to harvest behavioral traces of the analyzed samples in a time-efficient manner. The proposed system achieves high malware detection rate and promising predictive performance in the family classification, opening the possibility of coping with the use of obfuscation and the growing number of malware.",
"title": ""
}
] |
scidocsrr
|
cd036da5d6036ba4781cb1791e82f40e
|
Choice and ego-depletion: the moderating role of autonomy.
|
[
{
"docid": "99f52d6a7412060231a0bfe1d5dcea0d",
"text": "The concepts of self-regulation and autonomy are examined within an organizational framework. We begin by retracing the historical origins of the organizational viewpoint in early debates within the field of biology between vitalists and reductionists, from which the construct of self-regulation emerged. We then consider human autonomy as an evolved behavioral, developmental, and experiential phenomenon that operates at both neurobiological and psychological levels and requires very specific supports within higher order social organizations. We contrast autonomy or true self-regulation with controlling regulation (a nonautonomous form of intentional behavior) in phenomenological and functional terms, and we relate the forms of regulation to the developmental processes of intrinsic motivation and internalization. Subsequently, we describe how self-regulation versus control may be characterized by distinct neurobiological underpinnings, and we speculate about some of the adaptive advantages that may underlie the evolution of autonomy. Throughout, we argue that disturbances of autonomy, which have both biological and psychological etiologies, are central to many forms of psychopathology and social alienation.",
"title": ""
}
] |
[
{
"docid": "5e453defd762bb4ecfae5dcd13182b4a",
"text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.",
"title": ""
},
{
"docid": "ee5b46719023b5dbae96997bbf9925b0",
"text": "The teaching of reading in different languages should be informed by an effective evidence base. Although most children will eventually become competent, indeed skilled, readers of their languages, the pre-reading (e.g. phonological awareness) and language skills that they bring to school may differ in systematic ways for different language environments. A thorough understanding of potential differences is required if literacy teaching is to be optimized in different languages. Here we propose a theoretical framework based on a psycholinguistic grain size approach to guide the collection of evidence in different countries. We argue that the development of reading depends on children's phonological awareness in all languages studied to date. However, we propose that because languages vary in the consistency with which phonology is represented in orthography, there are developmental differences in the grain size of lexical representations, and accompanying differences in developmental reading strategies across orthographies.",
"title": ""
},
{
"docid": "e2d25382acd23c9431ccd3905d8bf13a",
"text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.",
"title": ""
},
{
"docid": "8b4b8c7bff6a6351edbae640a28bbed4",
"text": "Hardware Trojans recently emerged as a serious issue for computer systems, especially for those used in critical applications such as medical or military. Trojan proposed so far can affect the reliability of a device in various ways. Proposed effects range from the leakage of secret information to the complete malfunctioning of the device. A crucial point for securing the overall operation of a device is to guarantee the absence of hardware Trojans. In this paper, we survey several techniques for detecting malicious modification of circuit introduced at different phases of the design flow. We also highlight their capabilities limitations in thwarting hardware Trojans.",
"title": ""
},
{
"docid": "7dda8adb207e69ccbc52ce0497d3f5d4",
"text": "Statistics from security firms, research institutions and government organizations show that the number of data-leak instances have grown rapidly in recent years. Among various data-leak cases, human mistakes are one of the main causes of data loss. There exist solutions detecting inadvertent sensitive data leaks caused by human mistakes and to provide alerts for organizations. A common approach is to screen content in storage and transmission for exposed sensitive information. Such an approach usually requires the detection operation to be conducted in secrecy. However, this secrecy requirement is challenging to satisfy in practice, as detection servers may be compromised or outsourced. In this paper, we present a privacy-preserving data-leak detection (DLD) solution to solve the issue where a special set of sensitive data digests is used in detection. The advantage of our method is that it enables the data owner to safely delegate the detection operation to a semihonest provider without revealing the sensitive data to the provider. We describe how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees. The evaluation results show that our method can support accurate detection with very small number of false alarms under various data-leak scenarios.",
"title": ""
},
{
"docid": "0b10bd76d0d78e609c6397b60257a2ed",
"text": "Persistent increase in population of world is demanding more and more supply of food. Hence there is a significant need of advancement in cultivation to meet up the future food needs. It is important to know moisture levels in soil to maximize the output. But most of farmers cannot afford high cost devices to measure soil moisture. Our research work in this paper focuses on home-made low cost moisture sensor with accuracy. In this paper we present a method to manufacture soil moisture sensor to estimate moisture content in soil hence by providing information about required water supply for good cultivation. This sensor is tested with several samples of soil and able to meet considerable accuracy. Measuring soil moisture is an effective way to determine condition of soil and get information about the quantity of water that need to be supplied for cultivation. Two separate methods are illustrated in this paper to determine soil moisture over an area and along the depth.",
"title": ""
},
{
"docid": "e024246deed46b3166a466d2d5ee3214",
"text": "INTRODUCTION\nThis study reports on the development of a new measure of hostile social-cognitive biases for use in paranoia research, the Ambiguous Intentions Hostility Questionnaire (AIHQ). The AIHQ is comprised of a variety of negative situations that differ in terms of intentionality. Items were developed to reflect causes that were ambiguous, intentional, and accidental in nature.\n\n\nMETHODS\nParticipants were 322 college students who completed the AIHQ along with measures of paranoia, hostility, attributional style, and psychosis proneness. The reliability and validity of the AIHQ was evaluated using both correlational and multiple regression methods.\n\n\nRESULTS\nThe AIHQ had good levels of reliability (internal consistency and interrater reliability). The AIHQ was positively correlated with paranoia and hostility and was not correlated with measures of psychosis proneness, which supported the convergent and discriminant validity of the scale. In addition, the AIHQ predicted incremental variance in paranoia scores as compared to the attributional, hostility, and psychosis proneness measures. Ambiguous items showed the most consistent relationships with paranoia.\n\n\nCONCLUSIONS\nThe AIHQ appears to be a reliable and valid measure of hostile social cognitive biases in paranoia. Recommendations for using the AIHQ in the study of paranoia are discussed.",
"title": ""
},
{
"docid": "57d1648391cac4ccfefd85aacef6b5ba",
"text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.",
"title": ""
},
{
"docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12",
"text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players’ strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. Recent years have brought significant progress in solving massive imperfectinformation extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensiveform games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. 
Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.",
"title": ""
},
{
"docid": "ca807d3bed994a8e7492898e6bfe6dd2",
"text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.",
"title": ""
},
{
"docid": "74949417ff2ba47f153e05aac587e0dc",
"text": "This review examines the descriptive epidemiology, and risk and protective factors for youth suicide and suicidal behavior. A model of youth suicidal behavior is articulated, whereby suicidal behavior ensues as a result of an interaction of socio-cultural, developmental, psychiatric, psychological, and family-environmental factors. On the basis of this review, clinical and public health approaches to the reduction in youth suicide and recommendations for further research will be discussed.",
"title": ""
},
{
"docid": "b9ae6a5e5a0626db08a59d39220e9749",
"text": "The paper describes the architecture of SCIT supercomputer system of cluster type and the base architecture features used during this research project. This supercomputer system is put into research operation in Glushkov Institute of Cybernetics NAS of Ukraine from the early 2006 year. The paper may be useful for those scientists and engineers that are practically engaged in a cluster supercomputer systems design, integration and services.",
"title": ""
},
{
"docid": "a0fb601da8e6b79d4a876730cfee4271",
"text": "Social media platforms provide an inexpensive communication medium that allows anyone to publish content and anyone interested in the content can obtain it. However, this same potential of social media provide space for discourses that are harmful to certain groups of people. Examples of these discourses include bullying, offensive content, and hate speech. Out of these discourses hate speech is rapidly recognized as a serious problem by authorities of many countries. In this paper, we provide the first of a kind systematic large-scale measurement and analysis study of explicit expressions of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech, the sensitivity of hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.",
"title": ""
},
{
"docid": "78007b3276e795d76b692b40c4808c51",
"text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.",
"title": ""
},
{
"docid": "2b4d85ad7ec9bbb3b2b964d1552b3006",
"text": "The transmission of pain from peripheral tissues through the spinal cord to the higher centres of the brain is clearly not a passive simple process using exclusive pathways. Rather, circuitry within the spinal cord has the potential to alter, dramatically, the relation between the stimulus and the response to pain in an individual. Thus an interplay between spinal neuronal systems, both excitatory and inhibitory, will determine the messages delivered to higher levels of the central nervous system. The incoming messages may be attenuated or enhanced, changes which may be determined by the particular circumstances. The latter state, termed central hypersensitivity [61], whereby low levels of afferent activity are amplified by spinal pharmacological mechanisms has attracted much attention [13, 15]. However, additionally, inhibitory controls are subject to alteration so that opioid sensitivity in different pain states is not fixed [14]. This plasticity, the capacity for transmission in nociceptive systems to change, can be induced over very short time courses. Recent research on the pharmacology of nociception has started to shed some well-needed light on this rapid plasticity which could have profound consequences for the pharmacological treatment of pain [8, 13, 15, 23, 24, 35, 36, 41, 62]. The pharmacology of the sensory neurones in the dorsal horn of the spinal cord is complex, so much so that most of the candidate neurotransmitters and their receptors found in the CNS are also found here [4, 32]. The transmitters are derived from either the afferent fibres, intrinsic neurones or descending fibres. The majority of the transmitters and receptors are concentrated in the substantia gelatinosa, one of the densest neuronal areas in the CNS and crucial for the reception and modulation of nociceptive messages transmitted via the peripheral fibres [4]. Nociceptive C-fibres terminate in the outer lamina 1 and the underlying substantia gelatinosa, whereas the large tactile fibres terminate in deeper laminae. However, in addition to the lamina 1 cells which send long ascending axons to the brain, deep dorsal horn cells also give rise to ascending axons and respond to C-fibre stimulation. In the case of these deep cells the C-fibre input may be relayed via",
"title": ""
},
{
"docid": "ddfd1bc1ca748bee286df92f8850286c",
"text": "The rapid growth of Location-based Social Networks (LBSNs) provides a great opportunity to satisfy the strong demand for personalized Point-of-Interest (POI) recommendation services. However, with the tremendous increase of users and POIs, POI recommender systems still face several challenging problems: (1) the hardness of modeling complex user-POI interactions from sparse implicit feedback; (2) the difficulty of incorporating the geographical context information. To cope with these challenges, we propose a novel autoencoder-based model to learn the complex user-POI relations, namely SAE-NAD, which consists of a self-attentive encoder (SAE) and a neighbor-aware decoder (NAD). In particular, unlike previous works equally treat users' checked-in POIs, our self-attentive encoder adaptively differentiates the user preference degrees in multiple aspects, by adopting a multi-dimensional attention mechanism. To incorporate the geographical context information, we propose a neighbor-aware decoder to make users' reachability higher on the similar and nearby neighbors of checked-in POIs, which is achieved by the inner product of POI embeddings together with the radial basis function (RBF) kernel. To evaluate the proposed model, we conduct extensive experiments on three real-world datasets with many state-of-the-art methods and evaluation metrics. The experimental results demonstrate the effectiveness of our model.",
"title": ""
},
{
"docid": "1e8acf321f7ff3a1a496e4820364e2a8",
"text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.",
"title": ""
},
{
"docid": "065fc50e811af9a7080486eaf852ae3f",
"text": "While deep convolutional neural networks have shown a remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multi-modal data, and the spatial variability in images of objects remain to be major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multi-modal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance <inline-formula><tex-math notation=\"LaTeX\">$-$</tex-math><alternatives> <inline-graphic xlink:href=\"asif-ieq1-2747134.gif\"/></alternatives></inline-formula>this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability<inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math><alternatives><inline-graphic xlink:href=\"asif-ieq2-2747134.gif\"/> </alternatives></inline-formula>this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multi-modal hierarchical fusion<inline-formula><tex-math notation=\"LaTeX\">$-$</tex-math><alternatives> <inline-graphic xlink:href=\"asif-ieq3-2747134.gif\"/></alternatives></inline-formula>this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "4a817638751fdfe46dfccc43eea76cbd",
"text": "In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.",
"title": ""
}
] |
scidocsrr
|
1e98ee1b9d992765a3e01865a01f759f
|
Beyond Security and Privacy Perception: An Approach to Biometric Authentication Perception Change
|
[
{
"docid": "c99ae731ff819a1208c3e714be85f057",
"text": "Organizational information practices can result in a variety of privacy problems that can increase consumers’ concerns for information privacy. To explore the link between individuals and organizations regarding privacy, we study how institutional privacy assurances such as privacy policies and industry self-regulation can contribute to reducing individual privacy concerns. Drawing on Communication Privacy Management (CPM) theory, we develop a research model suggesting that an individual’s privacy concerns form through a cognitive process involving perceived privacy risk, privacy control, and his or her disposition to value privacy. Furthermore, individuals’ perceptions of institutional privacy assurances -namely, perceived effectiveness of privacy policies and perceived effectiveness of industry privacy self-regulation -are posited to affect the riskcontrol assessment from information disclosure, thus, being an essential component of privacy concerns. We empirically tested the research model through a survey that was administered to 823 users of four different types of websites: 1) electronic commerce sites, 2) social networking sites, 3) financial sites, and 4) healthcare sites. The results provide support for the majority of the hypothesized relationships. The study reported here is novel to the extent that existing empirical research has not explored the link between individuals’ privacy perceptions and institutional privacy assurances. We discuss implications for theory and practice and provide suggestions for future research.",
"title": ""
}
] |
[
{
"docid": "dba1d0b9a2c409bd6ff9c39cbdb1e7ed",
"text": "Recent research suggests that social interactions in video games may lead to the development of community bonding and prosocial attitudes. Building on this line of research, a national survey of U.S. adults finds that gamers who develop ties with a community of fellow gamers possess gaming social capital, a new gaming-related community construct that is shown to be a positive antecedent in predicting both face-to-face social capital and civic participation.",
"title": ""
},
{
"docid": "5eb63e991a00290d5892d010d0b28fef",
"text": "In this paper we investigate deceptive defense strategies for web servers. Web servers are widely exploited resources in the modern cyber threat landscape. Often these servers are exposed in the Internet and accessible for a broad range of valid as well as malicious users. Common security strategies like firewalls are not sufficient to protect web servers. Deception based Information Security enables a large set of counter measures to decrease the efficiency of intrusions. In this work we depict several techniques out of the reconnaissance process of an attacker. We match these with deceptive counter measures. All proposed measures are implemented in an experimental web server with deceptive counter measure abilities. We also conducted an experiment with honeytokens and evaluated delay strategies against automated scanner tools.",
"title": ""
},
{
"docid": "356a72153f61311546f6ff874ee79bb4",
"text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.",
"title": ""
},
{
"docid": "e8a01490bc3407a2f8e204408e34c5b3",
"text": "This paper presents the design and implementation of a Class EF2 inverter and Class EF2 rectifier for two -W wireless power transfer (WPT) systems, one operating at 6.78 MHz and the other at 27.12 MHz. It will be shown that the Class EF2 circuits can be designed to have beneficial features for WPT applications such as reduced second-harmonic component and lower total harmonic distortion, higher power-output capability, reduction in magnetic core requirements and operation at higher frequencies in rectification compared to other circuit topologies. A model will first be presented to analyze the circuits and to derive values of its components to achieve optimum switching operation. Additional analysis regarding harmonic content, magnetic core requirements and open-circuit protection will also be performed. The design and implementation process of the two Class-EF2-based WPT systems will be discussed and compared to an equivalent Class-E-based WPT system. Experimental results will be provided to confirm validity of the analysis. A dc-dc efficiency of 75% was achieved with Class-EF2-based systems.",
"title": ""
},
{
"docid": "0c7512ac95d72436e31b9b05199eefdd",
"text": "Usable security has unique usability challenges bec ause the need for security often means that standard human-comput er-in eraction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in s electing better passwords, thus increasing security by expanding th e effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots – portions of the image where users are more likely to select cli ck-points, allowing attackers to mount more successful diction ary attacks. We use persuasion to influence user choice in click -based graphical passwords, encouraging users to select mo re random, and hence more secure, click-points. Our approach i s to introduce persuasion to the Cued Click-Points graphical passw ord scheme (Chiasson, van Oorschot, Biddle, 2007) . Our resulting scheme significantly reduces hotspots while still maintain ing its usability.",
"title": ""
},
{
"docid": "45f500b2d7e3ee59a34ffe0fa34acb0a",
"text": "Task consolidation is a way to maximize utilization of cloud computing resources. Maximizing resource utilization provides various benefits such as the rationalization of maintenance , IT service customization, and QoS and reliable services. However, maximizing resource utilization does not mean efficient energy use. Much of the literature shows that energy consumption and resource utilization in clouds are highly coupled. Consequently, some of the literature aims to decrease resource utilization in order to save energy, while others try to reach a balance between resource utilization and energy consumption. In this paper, we present an energy-aware task consolidation (ETC) technique that minimizes energy consumption. ETC achieves this by restricting CPU use below a specified peak threshold. ETC does this by consolidating tasks amongst virtual clusters. In addition, the energy cost model considers network latency when a task migrates to another virtual cluster. To evaluate the performance of ETC we compare it against MaxUtil. MaxUtil is a recently developed greedy algorithm that aims to maximize cloud computing resources. The simulation results show that ETC can significantly reduce power consumption in a cloud system, with 17% improvement over MaxUtil. Cloud computing has recently become popular due to the maturity of related technologies such as network devices, software applications and hardware capacities. Resources in these systems can be widely distributed and the scale of resources involved can range from several servers to an entire data center. To integrate and make good use of resources at various scales, cloud computing needs efficient methods to manage them [4]. Consequently, the focus of much research in recent years has been on how to utilize resources and how to reduce power consumption. One of the key technologies in cloud computing is virtualization. The ability to create virtual machines (VMs) [14] dynamically on demand is a popular solution for managing resources on physical machines. Therefore, many methods [17,18] have been developed that enhance resource utilization such as memory compression, request discrimination, defining threshold for resource usage and task allocation among VMs. Improvements in power consumption, and the relationship between resource usage and energy consumption has also been widely studied [6,10–12,14–18]. Some research aims to improve resource utilization while others aim to reduce energy consumption. The goals of both are to reduce costs for data centers. Due to the large size of many data centers, the financial savings are substantial. Energy consumption varies according to CPU utilization [11]. Higher CPU utilization …",
"title": ""
},
{
"docid": "e3bb490de9489a0c02f023d25f0a94d7",
"text": "During the past two decades, self-efficacy has emerged as a highly effective predictor of students' motivation and learning. As a performance-based measure of perceived capability, self-efficacy differs conceptually and psychometrically from related motivational constructs, such as outcome expectations, self-concept, or locus of control. Researchers have succeeded in verifying its discriminant validity as well as convergent validity in predicting common motivational outcomes, such as students' activity choices, effort, persistence, and emotional reactions. Self-efficacy beliefs have been found to be sensitive to subtle changes in students' performance context, to interact with self-regulated learning processes, and to mediate students' academic achievement. Copyright 2000 Academic Press.",
"title": ""
},
{
"docid": "7a10f6b4215f1f4c9a9717412728d0df",
"text": "In this paper we review several novel approaches for research evaluation. We start with a brief overview of the peer review, its controversies, and metrics for assessing efficiency and overall quality of the peer review. We then discuss five approaches, including reputation-based ones, that come out of the research carried out by the LiquidPub project and research groups collaborated with LiquidPub. Those approaches are alternative or complementary to traditional peer review. We discuss pros and cons of the proposed approaches and conclude with a vision for the future of the research evaluation, arguing that no single system can suit all stakeholders in various communities.",
"title": ""
},
{
"docid": "d8e2fe04a2a900a55561f6e59fecc9fa",
"text": "In this paper realization of digital LCR meter is presented. Realized system is based on integrated circuit AD5933 which is controlled by microcontroller ATmega128. Device can calculate resistance, capacitance and inductance of the device under test as well as Dissipation and Quality factors. Operating frequency range is from 5 to 100 kHz with frequency sweep function in maximum 511 steps. Device has full standalone capabilities with LCD for displaying of results and keyboard for configuration. Created report of measured and calculated values is stored on micro SD card in format which is compatible with MS Excel which ensures easy offline analysis on PC. Accuracy of developed system is tested and verified through comparison with commercial LCR meter.",
"title": ""
},
{
"docid": "3d56d2c4b3b326bc676536d35b4bd77f",
"text": "In this work an experimental study about the capability of the LBP, HOG descriptors and color for clothing attribute classification is presented. Two different variants of the LBP descriptor are considered, the original LBP and the uniform LBP. Two classifiers, Linear SVM and Random Forest, have been included in the comparison because they have been frequently used in clothing attributes classification. The experiments are carried out with a public available dataset, the clothing attribute dataset, that has 26 attributes in total. The obtained accuracies are over 75% in most cases, reaching 80% for the necktie or sleeve length attributes.",
"title": ""
},
{
"docid": "3a46f6ff14e4921fa9bcdfdc9064b754",
"text": "Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool deep learning models by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent in the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.",
"title": ""
},
{
"docid": "aa88086e527a2da737eb1d5968a1f4a9",
"text": "Video analytics will drive a wide range of applications with great potential to impact society. A geographically distributed architecture of public clouds and edges that extend down to the cameras is the only feasible approach to meeting the strict real-time requirements of large-scale live video analytics.",
"title": ""
},
{
"docid": "2dee247b24afc7ddba44b312c0832bc1",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "4489ab0d4d1c29c5f72a67042250468a",
"text": "This paper adopts a holistic approach to explain why social capital matters for effective implementation, widespread uptake, greater social inclusion, and the sustainability of CI initiatives. It describes a theoretical framework drawn from diffusion of innovation, community development and social capital theories. The framework emphasises the interplay between physical infrastructure (including hard technologies and their location in the community), soft technologies (including capacity building, education, training and awareness raising), social infrastructure (including local networks and community organisations) and social capital (including trust and reciprocity, strong sense of community, shared vision, and outcomes from participation in local and external networks).",
"title": ""
},
{
"docid": "b60c32fd30fe83773196cdbd9c6b2af1",
"text": "We design a new connectivity pattern for the U-Net architecture. Given several stacked U-Nets, we couple each U-Net pair through the connections of their semantic blocks, resulting in the coupled U-Nets (CU-Net). The coupling connections could make the information flow more efficiently across U-Nets. The feature reuse across U-Nets makes each U-Net very parameter efficient. We evaluate the coupled U-Nets on two benchmark datasets of human pose estimation. Both the accuracy and model parameter number are compared. The CU-Net obtains comparable accuracy as state-of-the-art methods. However, it only has at least 60% fewer parameters than other approaches.",
"title": ""
},
{
"docid": "6cab55a27a8ad40ff905f87e1eb206e7",
"text": "In recent years poaching incidents has been massively increased encompass slaughtering of endangered species in Tanzania and Africa in totality. Different initiatives has been taken world widely including establishment of International Anti-Poaching foundation (IAPF). Tanzania in particular has taken several initiatives on the matter at different time including sending her own military army across the borders of National parks as an attempt to eradicate poaching activities. However poachers are still continue to put a bullet on the heads of these species of monumental importance. The main idea presented in this paper involve employing a modern and a sophisticated technology in which poachers will be left behind and being netted easily there by eliminating Poaching activities. The idea utilize animals themselves with sensors as mobile biological sensors (MBS) mounted with sensor fusion (having visual, infrared camera and GPS) that transmits the location of MBS, access points for wireless communication and a central computer system which classifies animal actions. The system propose three different action of responses, firstly: access points continuously receive data about animals’ location using GPS at certain time intervals and the gathered data is then classified and checked to see if there is a sudden movement (panic) of the animal groups: this action is called animal behavior classification (ABC). The second action can be called visualization where by different image processing techniques of the obtained images surrounding an animal group are performed and therefore provide an ample assistance in understanding what makes sudden movement of the animal group. The last action is to send messages to the game ranger’s cellular phones about the panic of animals and the location through GSM network",
"title": ""
},
{
"docid": "b3898262f167c63ba2dbb3aacc259d5f",
"text": "We propose two model-free visual object trackers for following targets using the low-cost quadrotor Parrot AR.Drone 2.0 at low altitudes. Our trackers employ correlation filters for short-term tracking and a redetection strategy based on tracking-learning-detection (TLD). We performed an extensive quantitative evaluation of our trackers and a wide variety of existing trackers on person pursuit sequences. The results show that our trackers outperform the existing trackers. In addition, we demonstrate the applicability of our proposed trackers in a series of flight experiments in unconstrained environments using human targets and an existing visual servoing controller.",
"title": ""
},
{
"docid": "59cf9407986097ac31214c0289d6f8a2",
"text": "Model selection is a core aspect in machine learning and is, occasionally, multi-objective in nature. For instance, hyper-parameter selection in a multi-task learning context is of multi-objective nature, since all the tasks' objectives must be optimized simultaneously. In this paper, a novel multi-objective racing algorithm (RA), namely S-Race, is put forward. Given an ensemble of models, our task is to reliably identify Pareto optimal models evaluated against multiple objectives, while minimizing the total computational cost. As a RA, S-Race attempts to eliminate non-promising models with confidence as early as possible, so as to concentrate computational resources on promising ones. Simultaneously, it addresses the problem of multi-objective model selection (MOMS) in the sense of Pareto optimality. In S-Race, the nonparametric sign test is utilized for pair-wise dominance relationship identification. Moreover, a discrete Holm's step-down procedure is adopted to control the family-wise error rate of the set of hypotheses made simultaneously. The significance level assigned to each family is adjusted adaptively during the race. In order to illustrate its merits, S-Race is applied on three MOMS problems: (1) selecting support vector machines for classification; (2) tuning the parameters of artificial bee colony algorithms for numerical optimization; and (3) constructing optimal hybrid recommendation systems for movie recommendation. The experimental results confirm that S-Race is an efficient and effective MOMS algorithm compared to a brute-force approach.",
"title": ""
},
{
"docid": "171ded161c7d61cfaf4663ba080b0c6a",
"text": "Digital advertisements are delivered in the form of static images, animations or videos, with the goal to promote a product, a service or an idea to desktop or mobile users. Thus, the advertiser pays a monetary cost to buy ad-space in a content provider’s medium (e.g., website) to place their advertisement in the consumer’s display. However, is it only the advertiser who pays for the ad delivery? Unlike traditional advertisements in mediums such as newspapers, TV or radio, in the digital world, the end-users are also paying a cost for the advertisement delivery. Whilst the cost on the advertiser’s side is clearly monetary, on the end-user, it includes both quantifiable costs, such as network requests and transferred bytes, and qualitative costs such as privacy loss to the ad ecosystem. In this study, we aim to increase user awareness regarding the hidden costs of digital advertisement in mobile devices, and compare the user and advertiser views. Specifically, we built OpenDAMP, a transparency tool that passively analyzes users’ web traffic and estimates the costs in both sides. We use a year-long dataset of 1270 real mobile users and by juxtaposing the costs of both sides, we identify a clear imbalance: the advertisers pay several times less to deliver ads, than the cost paid by the users to download them. In addition, the majority of users experience a significant privacy loss, through the personalized ad delivery mechanics.",
"title": ""
},
{
"docid": "ae9de9ddc0a81a3607a1cb8ceb25280c",
"text": "The major chip manufacturers have all introduced chip multiprocessing (CMP) and simultaneous multithreading (SMT) technology into their processing units. As a result, even low-end computing systems and game consoles have become shared memory multiprocessors with L1 and L2 cache sharing within a chip. Mid- and large-scale systems will have multiple processing chips and hence consist of an SMP-CMP-SMT configuration with non-uniform data sharing overheads. Current operating system schedulers are not aware of these new cache organizations, and as a result, distribute threads across processors in a way that causes many unnecessary, long-latency cross-chip cache accesses.\n In this paper we describe the design and implementation of a scheme to schedule threads based on sharing patterns detected online using features of standard performance monitoring units (PMUs) available in today's processing units. The primary advantage of using the PMU infrastructure is that it is fine-grained (down to the cache line) and has relatively low overhead. We have implemented our scheme in Linux running on an 8-way Power5 SMP-CMP-SMT multi-processor. For commercial multithreaded server workloads (VolanoMark, SPECjbb, and RUBiS), we are able to demonstrate reductions in cross-chip cache accesses of up to 70%. These reductions lead to application-reported performance improvements of up to 7%.",
"title": ""
}
] |
scidocsrr
|
a3c5ae71fb46d555a69641cc1354b458
|
Mapping CMMI-DEV maturity levels to ISO/IEC 15504 capability profiles
|
[
{
"docid": "3edd4bde8b7f4f9c3cc9d6fc3182f4ed",
"text": "Small and medium enterprises are a very important cog in the gears of the world economy. The software industry in most countries is composed of an industrial scheme that is made up mainly of small and medium software enterprises—SMEs. To strengthen these types of organizations, efficient Software Engineering practices are needed—practices which have been adapted to their size and type of business. Over the last two decades, the Software Engineering community has expressed special interest in software process improvement (SPI) in an effort to increase software product quality, as well as the productivity of software development. However, there is a widespread tendency to make a point of stressing that the success of SPI is only possible for large companies. In this article, a systematic review of published case studies on the SPI efforts carried out in SMEs is presented. Its objective is to analyse the existing approaches towards SPI which focus on SMEs and which report a case study carried out in industry. A further objective is that of discussing the significant issues related to this area of knowledge, and to provide an up-to-date state of the art, from which innovative research activities can be thought of and planned.",
"title": ""
}
] |
[
{
"docid": "deccfbca102068be749a231405aca30e",
"text": " Case report.. We present a case of 28-year-old female patient with condylomata gigantea (Buschke-Lowenstein tumor) in anal and perianal region with propagation on vulva and vagina. The local surgical excision and CO2 laser treatment were performed. Histological examination showed presence of HPV type 11 without malignant potential. Result.. Three months later, there was no recurrence.",
"title": ""
},
{
"docid": "f06e1cd245863415531e65318c97f96b",
"text": "In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to the state-of-art SR algorithms.",
"title": ""
},
{
"docid": "4edc0f70d6b8d599e28d245cbd8af31e",
"text": "To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.",
"title": ""
},
{
"docid": "e8cbbb8298b63422c8cb050521cf4287",
"text": "Dynamic Difficulty Adjustment (DDA) is a mechanism used in video games that automatically tailors the individual gaming experience to match an appropriate difficulty setting. This is generally achieved by removing pre-defined difficulty tiers such as Easy, Medium and Hard; and instead concentrates on balancing the gameplay to match the challenge to the individual’s abilities. The work presented in this paper examines the implementation of DDA in a custom survival game developed by the author, namely Colwell’s Castle Defence. The premise of this arcade-style game is to defend a castle from hordes of oncoming enemies. The AI system that we developed adjusts the enemy spawn rate based on the current performance of the player. Specifically, we read the Player Health and Gate Health at the end of each level and then assign the player with an appropriate difficulty tier for the proceeding level. We tested the impact of our technique on thirty human players and concluded, based on questionnaire feedback, that enabling the technique led to more enjoyable gameplay.",
"title": ""
},
{
"docid": "653bbea24044bd53e4e9e180593d2321",
"text": "In this paper, we present an integrated model of the two central tasks of dialog management: interpreting user actions and generating system actions. We model the interpretation task as a classication problem and the generation task as a prediction problem. These two tasks are interleaved in an incremental parsing-based dialog model. We compare three alternative parsing methods for this dialog model using a corpus of human-human spoken dialog from a catalog ordering domain that has been annotated for dialog acts and task/subtask information. We contrast the amount of context provided by each method and its impact on performance.",
"title": ""
},
{
"docid": "c6a97ab04490f8deecb31c6e429f1953",
"text": "In this paper we describe the new security features of the international standard DLMS/COSEM that came along with its new Green Book Ed. 8. We compare them with those of the German Smart Meter Gateway approach that uses TLS to protect the privacy of connections. We show that the security levels of the cryptographic core methods are similar in both systems. However, there are several aspects concerning the security on which the German approach provides more concrete implementation instructions than DLMS/COSEM does (like lifetimes of certificates and random generators). We describe the differences in security and architecture of the two systems.",
"title": ""
},
{
"docid": "337e638ed6a9147d04649af2ce273e10",
"text": "Paper introduces main characteristics and particulars of an innovative design for an Unmanned Surface Vehicle to autonomously launch and recover AUVs (Autonomous Underwater Vehicles) in open sea. The USV has an unconventional SWATH hull shape and in its smaller size version, it has dedicated hangar at midship that can host one medium size AUV, completely recovered onboard. The focus of this paper is concentrated on the prediction of the steady and unsteady hydrodynamic characteristics in terms of hull resistance and motion in waves. The seakeeping prediction is made by a fully viscous 3D Unsteady NavierStokes solver. The predicted pitch and heave response of the USV-SWATH in relatively high regular waves, in the nonlinear regime, are compared with those of an equivalent catamaran vessel and dramatic reduction in vertical motions and accelerations are found. On this good basis the paper concludes presenting a first hypothesis of the L&R system based on self-tensioning winches with hoists and belt assemblies.",
"title": ""
},
{
"docid": "da3201add57485d574c71c6fa95fc28c",
"text": "Two experiments (modeled after J. Deese's 1959 study) revealed remarkable levels of false recall and false recognition in a list learning paradigm. In Experiment 1, subjects studied lists of 12 words (e.g., bed, rest, awake); each list was composed of associates of 1 nonpresented word (e.g., sleep). On immediate free recall tests, the nonpresented associates were recalled 40% of the time and were later recognized with high confidence. In Experiment 2, a false recall rate of 55% was obtained with an expanded set of lists, and on a later recognition test, subjects produced false alarms to these items at a rate comparable to the hit rate. The act of recall enhanced later remembering of both studied and nonstudied material. The results reveal a powerful illusion of memory: People remember events that never happened.",
"title": ""
},
{
"docid": "95fa8dea9960f1ecdebef3c195819821",
"text": "Microemulsions are clear, stable, isotropic mixtures of oil, water and surfactant, frequently in combination with a cosurfactant. These systems are currently of interest to the pharmaceutical scientist because of their considerable potential to act as drug delivery vehicles by incorporating a wide range of drug molecules. In order to appreciate the potential of microemulsions as delivery vehicles, this review gives an overview of the formation and phase behaviour and characterization of microemulsions. The use of microemulsions and closely related microemulsion-based systems as drug delivery vehicles is reviewed, with particular emphasis being placed on recent developments and future directions.",
"title": ""
},
{
"docid": "48c3152cb78e1bb755966d15f43d6f5a",
"text": "0950-7051/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.knosys.2011.06.013 ⇑ Corresponding author. E-mail addresses: jimenezv@uji.es (V. García), s mollined@uji.es (R.A. Mollineda). The present paper investigates the influence of both the imbalance ratio and the classifier on the performance of several resampling strategies to deal with imbalanced data sets. The study focuses on evaluating how learning is affected when different resampling algorithms transform the originally imbalanced data into artificially balanced class distributions. Experiments over 17 real data sets using eight different classifiers, four resampling algorithms and four performance evaluation measures show that over-sampling the minority class consistently outperforms under-sampling the majority class when data sets are strongly imbalanced, whereas there are not significant differences for databases with a low imbalance. Results also indicate that the classifier has a very poor influence on the effectiveness of the resampling strategies. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "deccc92276cca4d064b0161fd8ee7dd9",
"text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. html structure or DOM tree structure to scrap data from web page. In this paper we are presenting survey of HTML aware web scrapping techniques. Keywords— DOM Tree, HTML structure, semi structured web pages, web scrapping and Web data extraction.",
"title": ""
},
{
"docid": "bf8ff16c84997fa12e1ae8bee1000565",
"text": "The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and comes with high operational costs. In this paper, we discuss many ongoing or implemented energy aware resource allocation techniques for cloud environments. We also present a comprehensive review on the different energy aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we come up with further research issues and challenges for future cloud environments.",
"title": ""
},
{
"docid": "7971ac5a8abaefc2ebc814624b5c8546",
"text": "Multibody structure from motion (SfM) is the extension of classical SfM to dynamic scenes with multiple rigidly moving objects. Recent research has unveiled some of the mathematical foundations of the problem, but a practical algorithm which can handle realistic sequences is still missing. In this paper, we discuss the requirements for such an algorithm, highlight theoretical issues and practical problems, and describe how a static structure-from-motion framework needs to be extended to handle real dynamic scenes. Theoretical issues include different situations in which the number of independently moving scene objects changes: Moving objects can enter or leave the field of view, merge into the static background (e.g., when a car is parked), or split off from the background and start moving independently. Practical issues arise due to small freely moving foreground objects with few and short feature tracks. We argue that all of these difficulties need to be handled online as structure-from-motion estimation progresses, and present an exemplary solution using the framework of probabilistic model-scoring.",
"title": ""
},
{
"docid": "afdb022bd163c1d5af226dc9624c1aee",
"text": "Sapienza University of Rome, Italy 1.",
"title": ""
},
{
"docid": "188322d93cb3242ccab716e810faaaac",
"text": "Citation relationship between scientific publications has been successfully used for scholarly bibliometrics, information retrieval and data mining tasks, and citation-based recommendation algorithms are well documented. While previous studies investigated citation relations from various viewpoints, most of them share the same assumption that, if paper1 cites paper2 (or author1 cites author2), they are connected, regardless of citation importance, sentiment, reason, topic, or motivation. However, this assumption is oversimplified. In this study, we employ an innovative \"context-rich heterogeneous network\" approach, which paves a new way for citation recommendation task. In the network, we characterize 1) the importance of citation relationships between citing and cited papers, and 2) the topical citation motivation. Unlike earlier studies, the citation information, in this paper, is characterized by citation textual contexts extracted from the full-text citing paper. We also propose algorithm to cope with the situation when large portion of full-text missing information exists in the bibliographic repository. Evaluation results show that, context-rich heterogeneous network can significantly enhance the citation recommendation performance.",
"title": ""
},
{
"docid": "d5bd7400d4b7e34cbf7af863df5f9935",
"text": "Fine-grained categorisation has been a challenging problem due to small inter-class variation, large intra-class variation and low number of training images. We propose a learning system which first clusters visually similar classes and then learns deep convolutional neural network features specific to each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset show that the proposed method outperforms recent fine-grained categorisation methods under the most difficult setting: no bounding boxes are presented at test time. It achieves a mean accuracy of 77.5%, compared to the previous best performance of 73.2%. We also show that progressive transfer learning allows us to first learn domain-generic features (for bird classification) which can then be adapted to specific set of bird classes, yielding improvements in accuracy.",
"title": ""
},
{
"docid": "a56e4d881081f9d88c9ca2f40f595c01",
"text": "We describe a framework for building abstraction hierarchies whereby an agent alternates skill- and representation-construction phases to construct a sequence of increasingly abstract Markov decision processes. Our formulation builds on recent results showing that the appropriate abstract representation of a problem is specified by the agent's skills. We describe how such a hierarchy can be used for fast planning, and illustrate the construction of an appropriate hierarchy for the Taxi domain.",
"title": ""
},
{
"docid": "5034b76d2a50d3955ccb9255fa054af9",
"text": "This paper proposed a highly sensitive micro-force sensor using curved PVDF for fetal heart rate monitoring long-termly. Based on the finite element method, numerical simulations were conducted to compare the straight and curved PVDF films in the aspects of the sensitivity for the dynamic excitation. The results showed that the peak voltages of the sensors varied remarkably with the curvature of the PVDF film. The maximum magnitude of the peak voltage response occurred at a certain value of the curvature. In the experiments, the voltage curves of the sensors were also recorded by an oscilloscope to study the effects of the mass on the surface of the sensors and validate the linearity and sensitivity of the sensors. The results showed that the sensitivity of the sensors up to about 60mV/N, which met the needs of fetal heart rate monitoring.",
"title": ""
},
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
},
{
"docid": "450655ee0532d854c7a5ecf2689035ec",
"text": "Instructional designers are expected to be familiar with the epistemological underpinnings of several theories and their consequences on the process of instruction. Constructivism is the dominant theory of the last decade and supports construction of knowledge by the individual. This paper discusses the basic principles underlying constructivism, particularly active, collaborative and authentic learning. Application of these principles on the process analysis, development, evaluation of instructional design poses certain challenges with regards to issues such as pre-specification of knowledge, authentic evaluation and learner control. Most of the problems are attributed to the fact that constructivism is a learning theory and not an instructional-design theory. Therefore, instructional designers must attempt to translate constructivism into instructional design through a more pragmatic approach that focuses on the principles of moderate rather than extreme constructivism and makes use of emergent technology tools. This shift could facilitate the development of more situated, experiential, meaningful and cost-effective learning environments.",
"title": ""
}
] |
scidocsrr
|
f74102035036a020e0db0d2267c91db1
|
Comparison of two approaches to building a vertical search tool: a case study in the nanotechnology domain
|
[
{
"docid": "c0a67a4d169590fa40dfa9d80768ef09",
"text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form i s scanned by a n IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \" auto-abstract. \" Introduction",
"title": ""
}
] |
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "cafaea34fd2183d6c43db3f46adde2f2",
"text": "Currently, filling, smoothing, or recontouring the face through the use of injectable fillers is one of the most popular forms of cosmetic surgery. Because these materials promise a more youthful appearance without anesthesia in a noninvasive way, various fillers have been used widely in different parts of the world. However, most of these fillers have not been approved by the Food and Drug Administration, and their applications might cause unpleasant disfiguring complications. This report describes a case of foreign body granuloma in the cheeks secondary to polyethylene glycol injection and shows the possible complications associated with the use of facial fillers.",
"title": ""
},
{
"docid": "5422a4e5a82d0636c8069ec58c2753a2",
"text": "In this talk, I will focus on the applications and the latest development of deep learning technologies at Alibaba. More specifically, I will discuss (a) how to handle high dimensional data in deep learning and its application to recommender system, (b) the development of deep learning models for transfer learning and its application to image classification, (c) the development of combinatorial optimization techniques for DNN model compression and its application to large-scale image classification and object detection, and (d) the exploration of deep learning technique for combinatorial optimization and its application to the packing problem in shipping industry. I will conclude my talk with a discussion of new directions for deep learning that are under development at Alibaba.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "60b2c3461885ee918b082bb10b468a19",
"text": "Serverless computing platforms provide function(s)-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless Computing environments, unlike Infrastructure-as-a-Service (IaaS) cloud platforms, abstract infrastructure management including creation of virtual machines (VMs), operating system containers, and request load balancing from users. To conserve cloud server capacity and energy, cloud providers allow hosting infrastructure to go COLD, deprovisioning containers when service demand is low freeing infrastructure to be harnessed by others. In this paper, we present results from our comprehensive investigation into the factors which influence microservice performance afforded by serverless computing. We examine hosting implications related to infrastructure elasticity, load balancing, provisioning variation, infrastructure retention, and memory reservation size. We identify four states of serverless infrastructure including: provider cold, VM cold, container cold, and warm and demonstrate how microservice performance varies up to 15x based on these states.",
"title": ""
},
{
"docid": "dbd11235f7b6b515f672b06bb10ebc3d",
"text": "Until recently job seeking has been a tricky, tedious and time consuming process, because people looking for a new position had to collect information from many different sources. Job recommendation systems have been proposed in order to automate and simplify this task, also increasing its effectiveness. However, current approaches rely on scarce manually collected data that often do not completely reveal people skills. Our work aims to find out relationships between jobs and people skills making use of data from LinkedIn users’ public profiles. Semantic associations arise by applying Latent Semantic Analysis (LSA). We use the mined semantics to obtain a hierarchical clustering of job positions and to build a job recommendation system. The outcome proves the effectiveness of our method in recommending job positions. Anyway, we argue that our approach is definitely general, because the extracted semantics could be worthy not only for job recommendation systems but also for recruiting systems. Furthermore, we point out that both the hierarchical clustering and the recommendation system do not require parameters to be tuned.",
"title": ""
},
{
"docid": "805583da675c068b7cc2bca80e918963",
"text": "Designing an actuator system for highly dynamic legged robots has been one of the grand challenges in robotics research. Conventional actuators for manufacturing applications have difficulty satisfying design requirements for high-speed locomotion, such as the need for high torque density and the ability to manage dynamic physical interactions. To address this challenge, this paper suggests a proprioceptive actuation paradigm that enables highly dynamic performance in legged machines. Proprioceptive actuation uses collocated force control at the joints to effectively control contact interactions at the feet under dynamic conditions. Modal analysis of a reduced leg model and dimensional analysis of DC motors address the main principles for implementation of this paradigm. In the realm of legged machines, this paradigm provides a unique combination of high torque density, high-bandwidth force control, and the ability to mitigate impacts through backdrivability. We introduce a new metric named the “impact mitigation factor” (IMF) to quantify backdrivability at impact, which enables design comparison across a wide class of robots. The MIT Cheetah leg is presented, and is shown to have an IMF that is comparable to other quadrupeds with series springs to handle impact. The design enables the Cheetah to control contact forces during dynamic bounding, with contact times down to 85 ms and peak forces over 450 N. The unique capabilities of the MIT Cheetah, achieving impact-robust force-controlled operation in high-speed three-dimensional running and jumping, suggest wider implementation of this holistic actuation approach.",
"title": ""
},
{
"docid": "3ef1f71f47175d2687d5c11b0d023162",
"text": "In attempting to fit a model of analogical problem solving to protocol data of students solving physics problems, several unexpected observations were made. Analogies between examples and exercises (a form of case-based reasoning) consisted of two distinct types of events. During an initialization event, the solver retrieved an example, set up a mapping between it and the problem, and decided whether the example was useful. During a transfer event, the solver inferred something about the problem’s solution. Many different types of initialization and transfer events were observed. Poor solvers tended to follow the example verbatim, copying each solution line over to the problem. Good solvers tried to solve the problem themselves, but referred to the example when they got stuck, or wanted to check a step, or wanted to avoid a detailed calculation. Rather than learn from analogies, both Good and Poor solvers tended to repeat analogies at subsequent similar situations. A revised version of the model is proposed (but not yet implemented) that appears to be consistent with all the findings observed in this and other studies of the same subjects.",
"title": ""
},
{
"docid": "4301aa3bb6a7d1ca9c0c17b8a12ebb37",
"text": "A CAPTCHA is a test that can, automatically, tell human and computer programs apart. It is a mechanism widely used nowadays for protecting web applications, interfaces, and services from malicious users and automated spammers. Usability and robustness are two fundamental aspects with CAPTCHA, where the usability aspect is the ease with which humans pass its challenges, while the robustness is the strength of its segmentation-resistance mechanism. The collapsing mechanism, which is removing the space between characters to prevent segmentation, has been shown to be reasonably resistant to known attacks. On the other hand, this mechanism drops considerably the human-solvability of text-based CAPTCHAs. Accordingly, an optimizer has previously been proposed that automatically enhances the usability of a CAPTCHA generation without sacrificing its robustness level. However, this optimizer has not yet been evaluated in terms of improving the usability. This paper, therefore, evaluates the usability of this optimizer by conducting an experimental study. The results of this evaluation showed that a statistically significant enhancement is found in the usability of text-based CAPTCHA generation. Keywords—text-based CAPTCHA; usability; security; optimization; experimentation; evaluation",
"title": ""
},
{
"docid": "19f1a6c9c5faf73b8868164e8bb310c6",
"text": "Holoprosencephaly refers to a spectrum of craniofacial malformations including cyclopia, ethmocephaly, cebocephaly, and premaxillary agenesis. Etiologic heterogeneity is well documented. Chromosomal, genetic, and teratogenic factors have been implicated. Recognition of holoprosencephaly as a developmental field defect stresses the importance of close scrutiny of relatives for mild forms such as single median incisor, hypotelorism, bifid uvula, or pituitary deficiency.",
"title": ""
},
{
"docid": "2fbe9db6c676dd64c95e72e8990c63f0",
"text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.",
"title": ""
},
{
"docid": "cb0b7879f61630b467aa595d961bfcef",
"text": "UNLABELLED\nGlucagon-like peptide 1 (GLP-1[7-36 amide]) is an incretin hormone primarily synthesized in the lower gut (ileum, colon/rectum). Nevertheless, there is an early increment in plasma GLP-1 immediately after ingesting glucose or mixed meals, before nutrients have entered GLP-1 rich intestinal regions. The responsible signalling pathway between the upper and lower gut is not clear. It was the aim of this study to see, whether small intestinal resection or colonectomy changes GLP-1[7-36 amide] release after oral glucose. In eight healthy controls, in seven patients with inactive Crohn's disease (no surgery), in nine patients each after primarily jejunal or ileal small intestinal resections, and in six colonectomized patients not different in age (p = 0.10), body-mass-index (p = 0.24), waist-hip-ratio (p = 0.43), and HbA1c (p = 0.22), oral glucose tolerance tests (75 g) were performed in the fasting state. GLP-1[7-36 amide], insulin C-peptide, GIP and glucagon (specific (RIAs) were measured over 240 min.\n\n\nSTATISTICS\nRepeated measures ANOVA, t-test (significance: p < 0.05). A clear and early (peak: 15-30 min) GLP-1[7-36 amide] response was observed in all subjects, without any significant difference between gut-resected and control groups (p = 0.95). There were no significant differences in oral glucose tolerance (p = 0.21) or in the suppression of pancreatic glucagon (p = 0.36). Colonectomized patients had a higher insulin (p = 0.011) and C-peptide (p = 0.0023) response in comparison to all other groups. GIP responses also were higher in the colonectomized patients (p = 0.0005). Inactive Crohn's disease and resections of the small intestine as well as proctocolectomy did not change overall GLP-1[7-36 amide] responses and especially not the early increment after oral glucose. This may indicate release of GLP-1[7-36 amide] after oral glucose from the small number of GLP-1[7-36 amide] producing L-cells in the upper gut rather than from the main source in the ileum, colon and rectum. Colonectomized patients are characterized by insulin hypersecretion, which in combination with their normal oral glucose tolerance possibly indicates a reduced insulin sensitivity in this patient group. GIP may play a role in mediating insulin hypersecretion in these patients.",
"title": ""
},
{
"docid": "7afa24cc5aa346b79436c1b9b7b15b23",
"text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.",
"title": ""
},
{
"docid": "d780db3ec609d74827a88c0fa0d25f56",
"text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.",
"title": ""
},
{
"docid": "ae57246e37060c8338ad9894a19f1b6b",
"text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies. It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.",
"title": ""
},
{
"docid": "e14f1292fd3d0f744f041219217f1e15",
"text": "Previous research highlights how adept people are at emotional recovery after rejection, but less research has examined factors that can prevent full recovery. In five studies, we investigate how changing one's self-definition in response to rejection causes more lasting damage. We demonstrate that people who endorse an entity theory of personality (i.e., personality cannot be changed) report alterations in their self-definitions when reflecting on past rejections (Studies 1, 2, and 3) or imagining novel rejection experiences (Studies 4 and 5). Further, these changes in self-definition hinder post-rejection recovery, causing individuals to feel haunted by their past, that is, to fear the recurrence of rejection and to experience lingering negative affect from the rejection. Thus, beliefs that prompt people to tie experiences of rejection to self-definition cause rejection's impact to linger.",
"title": ""
},
{
"docid": "cd108d7b5487cbcf5226b531906364a7",
"text": "There has been a great deal of hype about Amazon's simple storage service (S3). S3 provides infinite scalability and high availability at low cost. Currently, S3 is used mostly to store multi-media documents (videos, photos, audio) which are shared by a community of people and rarely updated. The purpose of this paper is to demonstrate the opportunities and limitations of using S3 as a storage system for general-purpose database applications which involve small objects and frequent updates. Read, write, and commit protocols are presented. Furthermore, the cost ($), performance, and consistency properties of such a storage system are studied.",
"title": ""
},
{
"docid": "762376fb3a4c0b7fe596b76cc5b2dde2",
"text": "We describe our system (DT Team) submitted at SemEval-2017 Task 1, Semantic Textual Similarity (STS) challenge for English (Track 5). We developed three different models with various features including similarity scores calculated using word and chunk alignments, word/sentence embeddings, and Gaussian Mixture Model (GMM). The correlation between our system’s output and the human judgments were up to 0.8536, which is more than 10% above baseline, and almost as good as the best performing system which was at 0.8547 correlation (the difference is just about 0.1%). Also, our system produced leading results when evaluated with a separate STS benchmark dataset. The word alignment and sentence embeddings based features were found to be very effective.",
"title": ""
},
{
"docid": "405a5cbb1caa0d3e85d0978f6cd28f5d",
"text": "BACKGROUND\nTexting while driving and other cell-phone reading and writing activities are high-risk activities associated with motor vehicle collisions and mortality. This paper describes the development and preliminary evaluation of the Distracted Driving Survey (DDS) and score.\n\n\nMETHODS\nSurvey questions were developed by a research team using semi-structured interviews, pilot-tested, and evaluated in young drivers for validity and reliability. Questions focused on texting while driving and use of email, social media, and maps on cellular phones with specific questions about the driving speeds at which these activities are performed.\n\n\nRESULTS\nIn 228 drivers 18-24 years old, the DDS showed excellent internal consistency (Cronbach's alpha = 0.93) and correlations with reported 12-month crash rates. The score is reported on a 0-44 scale with 44 being highest risk behaviors. For every 1 unit increase of the DDS score, the odds of reporting a car crash increases 7 %. The survey can be completed in two minutes, or less than five minutes if demographic and background information is included. Text messaging was common; 59.2 and 71.5 % of respondents said they wrote and read text messages, respectively, while driving in the last 30 days.\n\n\nCONCLUSION\nThe DDS is an 11-item scale that measures cell phone-related distracted driving risk and includes reading/viewing and writing subscores. The scale demonstrated strong validity and reliability in drivers age 24 and younger. The DDS may be useful for measuring rates of cell-phone related distracted driving and for evaluating public health interventions focused on reducing such behaviors.",
"title": ""
}
] |
scidocsrr
|
fae45e2371cbe78186beea269376aec2
|
Supplier Development at Honda, Nissan and Toyota: Comparative Case Studies of Organizational Capability Enhancement
|
[
{
"docid": "3fd9fd52be3153fe84f2ea6319665711",
"text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.",
"title": ""
}
] |
[
{
"docid": "82e866d42fed897b66e49c92209ad805",
"text": "A fingerprinting design extracts discriminating features, called fingerprints. The extracted features are unique and specific to each image/video. The visual hash is usually a global fingerprinting technique with crypto-system constraints. In this paper, we propose an innovative video content identification process which combines a visual hash function and a local fingerprinting. Thanks to a visual hash function, we observe the video content variation and we detect key frames. A local image fingerprint technique characterizes the detected key frames. The set of local fingerprints for the whole video summarizes the video or fragments of the video. The video fingerprinting algorithm identifies an unknown video or a fragment of video within a video fingerprint database. It compares the local fingerprints of the candidate video with all local fingerprints of a database even if strong distortions are applied to an original content.",
"title": ""
},
{
"docid": "63b73a09437ce848426847f17ce9703d",
"text": "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacityboosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to “straightforward” network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straightforward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.",
"title": ""
},
{
"docid": "a91959f4f85902acea8eb4b611b2799f",
"text": "--The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also",
"title": ""
},
{
"docid": "48c73018637d397c643c9bbf5d8157d6",
"text": "Recent results on active multifeed Ka-band developments for satellite communication are presented. Basically, the active multifeed system is an array-fed, single-offset reflector antenna. The antenna array is fed by an active microstrip beamforming network based on a low temperature cofired ceramics substrate with monolithic microwave integrated circuit based phase and amplitude actuators. In this paper, the benefits of such a technology are shown as well as first project results.",
"title": ""
},
{
"docid": "d6c7ab310c240596dcf7cb6a5572e5f3",
"text": "We present a knowledge-rich approach to computing semantic relatedness which exploits the joint contribution of different languages. Our approach is based on the lexicon and semantic knowledge of a wide-coverage multilingual knowledge base, which is used to compute semantic graphs in a variety of languages. Complementary information from these graphs is then combined to produce a ‘core’ graph where disambiguated translations are connected by means of strong semantic relations. We evaluate our approach on standard monolingual and bilingual datasets, and show that: i) we outperform a graph-based approach which does not use multilinguality in a joint way; ii) we achieve uniformly competitive results for both resource-rich and resource-poor languages.",
"title": ""
},
{
"docid": "13572c74a989b8677eec026788b381fe",
"text": "We examined the effect of stereotype threat on blood pressure reactivity. Compared with European Americans, and African Americans under little or no stereotype threat, African Americans under stereotype threat exhibited larger increases in mean arterial blood pressure during an academic test, and performed more poorly on difficult test items. We discuss the significance of these findings for understanding the incidence of hypertension among African Americans.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "8b9a7201d3b0ea20705c8aea7751d59f",
"text": "Positive patient outcomes require effective teamwork, communication, and technological literacy. These skills vary among the unprecedented five generations in the nursing workforce, spanning the \"Silent Generation\" nurses deferring retirement to the newest \"iGeneration.\" Nursing professional development educators must understand generational differences; address communication, information technology, and team-building competencies across generations; and promote integration of learner-centered strategies into professional development activities.",
"title": ""
},
{
"docid": "977efac2809f4dc455e1289ef54008b0",
"text": "A novel 3-D NAND flash memory device, VSAT (Vertical-Stacked-Array-Transistor), has successfully been achieved. The VSAT was realized through a cost-effective and straightforward process called PIPE (planarized-Integration-on-the-same-plane). The VSAT combined with PIPE forms a unique 3-D vertical integration method that may be exploited for ultra-high-density Flash memory chip and solid-state-drive (SSD) applications. The off-current level in the polysilicon-channel transistor dramatically decreases by five orders of magnitude by using an ultra-thin body of 20 nm thick and a double-gate-in-series structure. In addition, hydrogen annealing improves the subthreshold swing and the mobility of the polysilicon-channel transistor.",
"title": ""
},
{
"docid": "cece842f05a59c824a2272106ff2e3a9",
"text": "Recent developments in sensor technology [1], [2] have resulted in the deployment of mobile robots equipped with multiple sensors, in specific real-world applications [3]–[6]. A robot equipped with multiple sensors, however, obtains information about different regions of the scene, in different formats and with varying levels of uncertainty. In addition, the bits of information obtained from different sensors may contradict or complement each other. One open challenge to the widespread deployment of robots is the ability to fully utilize the information obtained from each sensor, in order to operate robustly in dynamic environments. This paper presents a probabilistic framework to address autonomous multisensor information fusion on a humanoid robot. The robot exploits the known structure of the environment to autonomously model the expected performance of the individual information processing schemes. The learned models are used to effectively merge the available information. As a result, the robot is able to robustly detect and localize mobile obstacles in its environment. The algorithm is fully implemented and tested on a humanoid robot platform (Aldebaran Naos [7]) in the robot soccer scenario.",
"title": ""
},
{
"docid": "2d955a3e27c6d3419417946066acd9c8",
"text": "Progress in DNA sequencing has revealed the startling complexity of cancer genomes, which typically carry thousands of somatic mutations. However, it remains unclear which are the key driver mutations or dependencies in a given cancer and how these influence pathogenesis and response to therapy. Although tumors of similar types and clinical outcomes can have patterns of mutations that are strikingly different, it is becoming apparent that these mutations recurrently hijack the same hallmark molecular pathways and networks. For this reason, it is likely that successful interpretation of cancer genomes will require comprehensive knowledge of the molecular networks under selective pressure in oncogenesis. Here we announce the creation of a new effort, The Cancer Cell Map Initiative (CCMI), aimed at systematically detailing these complex interactions among cancer genes and how they differ between diseased and healthy states. We discuss recent progress that enables creation of these cancer cell maps across a range of tumor types and how they can be used to target networks disrupted in individual patients, significantly accelerating the development of precision medicine.",
"title": ""
},
{
"docid": "12afcb4303fa763eccbbbe82cfbce96c",
"text": "Hepatocellular carcinoma is the sixth most prevalent cancer and the third most frequent cause of cancer-related death. Patients with cirrhosis are at highest risk of developing this malignant disease, and ultrasonography every 6 months is recommended. Surveillance with ultrasonography allows diagnosis at early stages when the tumour might be curable by resection, liver transplantation, or ablation, and 5-year survival higher than 50% can be achieved. Patients with small solitary tumours and very well preserved liver function are the best candidates for surgical resection. Liver transplantation is most beneficial for individuals who are not good candidates for resection, especially those within Milano criteria (solitary tumour ≤5 cm and up to three nodules ≤3 cm). Donor shortage greatly limits its applicability. Percutaneous ablation is the most frequently used treatment but its effectiveness is limited by tumour size and localisation. In asymptomatic patients with multifocal disease without vascular invasion or extrahepatic spread not amenable to curative treatments, chemoembolisation can provide survival benefit. Findings of randomised trials of sorafenib have shown survival benefits for individuals with advanced hepatocellular carcinoma, suggesting that molecular-targeted therapies could be effective in this chemoresistant cancer. Research is active in the area of pathogenesis and treatment of hepatocellular carcinoma.",
"title": ""
},
{
"docid": "a847c55498e36e2ca6e8f27a7cf59c6e",
"text": "Although three-dimensional computer graphics have been around for several decades, there has been a surge of general interest towards the field in the last couple of years. Just a quick glance at the latest blockbuster movies is enough to see the public's fascination with the new generation of graphics. As exciting as graphics are, however, there is a definite barrier which prevents most people from learning about them. For one thing, there is a lot of math and theory involved. Beyond that, just getting a window to display even simple 2D graphics can often be a daunting task. In this article, we will talk about a powerful yet simple 3D graphics method known as ray tracing, which can be understood and implemented without dealing with much math or the intricacies of windowing systems. The only math we assume is a basic knowledge of vectors, dot products, and cross products. We will skip most of the theory behind ray tracing, focusing instead on a general overview of the technique from an implementation-oriented perspective. Full C++ source code for a simple, hardwareindependent ray tracer is available online, to show how the principles described in this paper are applied.",
"title": ""
},
{
"docid": "f41994d4d916cb9d68840d46655dc63b",
"text": "Cloud computing refers to adopting advanced virtualized resources for high scalability that can be shared with end users. Utilization of this technology is expediting in the world to intensify the potential of cloud computing based on E-learning in higher education institutions. Due to various apparent reasons, some higher education institutions are disinclined to relocate internal services to cloud services. To develop a model for utilizing E-learning based on cloud computing is the focal point of this paper, which is based on two prominent theories: The Fit-Viability Model and Diffusion of Innovation, in addition to information culture factors. The main purpose of this model is to investigate the significant factors to exploit the cloud in the enhancement of E-learning in higher education institutions.",
"title": ""
},
{
"docid": "3c8b9a015157a7dd7ce4a6b0b35847d9",
"text": "While more and more people are relying on social media for news feeds, serious news consumers still resort to well-established news outlets for more accurate and in-depth reporting and analyses. They may also look for reports on related events that have happened before and other background information in order to better understand the event being reported. Many news outlets already create sidebars and embed hyperlinks to help news readers, often with manual efforts. Technologies in IR and NLP already exist to support those features, but standard test collections do not address the tasks of modern news consumption. To help advance such technologies and transfer them to news reporting, NIST, in partnership with the Washington Post, is starting a new TREC track in 2018 known as the News Track.",
"title": ""
},
{
"docid": "6a938ceeec7601c7a7bf1ff0107f0163",
"text": "We have been developing a 4DOF exoskeleton robot system in order to assist shoulder vertical motion, shoulder horizontal motion, elbow motion, and forearm motion of physically weak persons such as elderly, injured, or disabled persons. The robot is directly attached to a user's body and activated based on EMG (electromyogram) signals of the user's muscles, since the EMG signals directly reflect the user's motion intention. A neuro-fuzzy controller has been applied to control the exoskeleton robot system. In this paper, controller adaptation method to user's EMG signals is proposed. A motion indicator is introduced to indicate the motion intention of the user for the controller adaptation. The experimental results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "7151c9301a467b29cd4182c9ea3ceef9",
"text": "This study had two purposes: The first was to investigate the effects of instruction on pragmatic acquisition in writing. In particular, the focus was on the use of hedging devices in the academic writing of learners of English as a second language. The second purpose was to discover whether this training transferred to a less-planned, less-formal, computer-mediated type of writing, namely a Daedalus interaction. Graduate students enrolled in an academic writing class for non-native Englishspeakers received treatment designed to increase their metapragmatic awareness and improve their ability to use hedging devices. Data were compared to a control group that did not receive the treatment. The treatment group showed statistically significant increases in the use of hedging devices in the research papers and in the computer-mediated discussion.",
"title": ""
},
{
"docid": "48a0e75b97fdaa734f033c6b7791e81f",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
},
{
"docid": "20745f53566e36f4f9f8090c5914b954",
"text": "Various accessibility activities are improving blind access to the increasingly indispensable WWW. These approaches use various metrics to measure the Web's accessibility. “Ease of navigation” (navigability) is one of the crucial factors for blind usability, especially for complicated webpages used in portals and online shopping sites. However, it is difficult for automatic checking tools to evaluate the navigation capabilities even for a single webpage. Navigability issues for complete Web applications are still far beyond their capabilities.\n This study aims at obtaining quantitative results about the current accessibility status of real world Web applications, and analyzes real users' behavior on such websites. In Study 1, an automatic analysis method for webpage navigability is introduced, and then a broad survey using this method for 30 international online shopping sites is described. The next study (Study 2) focuses on a fine-grained analysis of real users' behavior on some of these online shopping sites. We modified a voice browser to record each user's actions and the information presented to that user. We conducted user testing on existing sites with this tool. We also developed an analysis and visualization method for the recorded information. The results showed us that users strongly depend on scanning navigation instead of logical navigation. A landmark-oriented navigation model was proposed based on the results. Finally, we discuss future possibilities for improving navigability, including proposals for voice browsers.",
"title": ""
},
{
"docid": "a66cc5179dd276acd4d49dd32e3fe9df",
"text": "Improving student achievement is vital for our nation’s competitiveness. Scientific research shows how the physical classroom environment influences student achievement. Two findings are key: First, the building’s structural facilities profoundly influence learning. Inadequate lighting, noise, low air quality, and deficient heating in the classroom are significantly related to worse student achievement. Over half of U.S. schools have inadequate structural facilities, and students of color and lower income students are more likely to attend schools with inadequate structural facilities. Second, scientific studies reveal the unexpected importance of a classroom’s symbolic features, such as objects and wall décor, in influencing student learning and achievement in that environment. Symbols inform students whether they are valued learners and belong within the classroom, with far-reaching consequences for students’ educational choices and achievement. We outline policy implications of the scientific findings—noting relevant policy audiences—and specify critical features of classroom design that can improve student achievement, especially for the most vulnerable students.",
"title": ""
}
] |
scidocsrr
|
1b1567e3d5a0242b31c437b5bcb09e1a
|
Stereo-based 6D object localization for grasping with humanoid robot systems
|
[
{
"docid": "59ac2e47ed0824eeba1621673f2dccf5",
"text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot",
"title": ""
}
] |
[
{
"docid": "fb70de7ed3e42c37b130686bfa3aee47",
"text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.",
"title": ""
},
{
"docid": "032fb65ac300c477d82ccbe6918115f4",
"text": "Three concepts (a) network programmability by clear separation of data and control planes and (b) sharing of network infrastructure to provide multitenancy, including traffic and address isolation, in large data center networks and (c) replacing the functions that traditionally run on a specialized hardware, with the software-realizations that run on commodity servers have gained lot of attention by both Industry and research-community over past few years. These three concepts are broadly referred as software defined networking (SDN), network virtualization (NV) and network functions virtualization (NFV). This paper presents a thorough study of these three concepts, including how SDN technology can complement the network virtualization and network functions virtualization. SDN, is about applying modularity to network control, which gives network designer the freedom to re-factor the control plane. This modularity has found its application in various areas including network virtualization. This work begins with the survey of software defined networking, considering various perspectives. The survey of SDN is followed by discussing how SDN plays a significant role in NV and NFV. Finally, this work also attempts to explore future directions in SDN based on current trends. Keywords—Software defined networking, Network Virtualization, Network Functions Virtualization, OpenFlow, Data Center, Overlay, Underlay, Network Planes, Programmable networks.",
"title": ""
},
{
"docid": "29f6917a8eaf7958ffa3408a41e981a4",
"text": "Reconstruction and rehabilitation following rhinectomy remains controversial and presents a complex problem. Although reconstruction with local and microvascular flaps is a valid option, the aesthetic results may not always be satisfactory. The aesthetic results achieved with a nasal prosthesis are excellent; however patient acceptance relies on a secure method of retention. The technique used and results obtained in a large series of patients undergoing rhinectomy and receiving zygomatic implants for the retention of a nasal prosthesis are described here. A total of 56 zygomatic implants (28 patients) were placed, providing excellent retention and durability with the loss of only one implant in 15 years.",
"title": ""
},
{
"docid": "bdf3417010f59745e4aaa1d47b71c70e",
"text": "Recent studies witness the success of Bag-of-Features (BoF) frameworks for video based human action recognition. The detection and description of local interest regions are two fundamental problems in BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatialtemporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [1] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundary which can greatly reduce the number of valid trajectories while preserve the discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take account of the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which can improve the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve superior performance than the state-ofthe-art methods. We report 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51.",
"title": ""
},
{
"docid": "3a920687e57591c1abfaf10b691132a7",
"text": "BP3TKI Palembang is the government agencies that coordinate, execute and selection of prospective migrants registration and placement. To simplify the existing procedures and improve decision-making is necessary to build a decision support system (DSS) to determine eligibility for employment abroad by applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear sequential systems development methods. The system is built using Microsoft Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system using use case diagrams and class diagrams to identify the needs of users and systems as well as systems implementation guidelines. Decision Support System which is capable of ranking the dihasialkan to prospective migrants, making it easier for parties to take keputusna BP3TKI the workers who will be flown out of the country.",
"title": ""
},
{
"docid": "b60124e70f29214d131e9d163727c0bf",
"text": "Differential privacy is a recent notion of privacy tailored to the problem of statistical disclosure control: how to release statistical information about a set of people without compromising the the privacy of any individual [7].\n We describe new work [10, 9] that extends differentially private data analysis beyond the traditional setting of a trusted curator operating, in perfect isolation, on a static dataset. We ask\n • How can we guarantee differential privacy, even against an adversary that has access to the algorithm's internal state, eg, by subpoena? An algorithm that achives this is said to be pan-private.\n • How can we guarantee differential privacy when the algorithm must continually produce outputs? We call this differential privacy under continual observation.\n We also consider these requirements in conjunction.",
"title": ""
},
{
"docid": "7d26c09bf274ae41f19a6aafc6a43d18",
"text": "Converging findings of animal and human studies provide compelling evidence that the amygdala is critically involved in enabling us to acquire and retain lasting memories of emotional experiences. This review focuses primarily on the findings of research investigating the role of the amygdala in modulating the consolidation of long-term memories. Considerable evidence from animal studies investigating the effects of posttraining systemic or intra-amygdala infusions of hormones and drugs, as well as selective lesions of specific amygdala nuclei, indicates that (a) the amygdala mediates the memory-modulating effects of adrenal stress hormones and several classes of neurotransmitters; (b) the effects are selectively mediated by the basolateral complex of the amygdala (BLA); (c) the influences involve interactions of several neuromodulatory systems within the BLA that converge in influencing noradrenergic and muscarinic cholinergic activation; (d) the BLA modulates memory consolidation via efferents to other brain regions, including the caudate nucleus, nucleus accumbens, and cortex; and (e) the BLA modulates the consolidation of memory of many different kinds of information. The findings of human brain imaging studies are consistent with those of animal studies in suggesting that activation of the amygdala influences the consolidation of long-term memory; the degree of activation of the amygdala by emotional arousal during encoding of emotionally arousing material (either pleasant or unpleasant) correlates highly with subsequent recall. The activation of neuromodulatory systems affecting the BLA and its projections to other brain regions involved in processing different kinds of information plays a key role in enabling emotionally significant experiences to be well remembered.",
"title": ""
},
{
"docid": "1585951d989c0e5210e5fee28e91f353",
"text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.",
"title": ""
},
{
"docid": "a0d6536cd8c85fe87cb316f92b489d32",
"text": "As a design of information-centric network architecture, Named Data Networking (NDN) provides content-based security. The signature binding the name with the content is the key point of content-based security in NDN. However, signing a content will introduce a significant computation overhead, especially for dynamically generated content. Adversaries can take advantages of such computation overhead to deplete the resources of the content provider. In this paper, we propose Interest Cash, an application-based countermeasure against Interest Flooding for dynamic content. Interest Cash requires a content consumer to solve a puzzle before it sends an Interest. The content consumer should provide a solution to this puzzle as cash to get the signing service from the content provider. The experiment shows that an adversary has to use more than 300 times computation resources of the content provider to commit a successful attack when Interest Cash is used.",
"title": ""
},
{
"docid": "6a36f1f291e1b0e2e3f8a32a5f95d0d4",
"text": "Recommender systems always aim to provide recommendations for a user based on historical ratings collected from a single domain (e.g., movies or books) only, which may suffer from the data sparsity problem. Recently, several recommendation models have been proposed to transfer knowledge by pooling together the rating data from multiple domains to alleviate the sparsity problem, which typically assume that multiple domains share a latent common rating pattern based on the user-item co-clustering. In practice, however, the related domains do not necessarily share such a common rating pattern, and diversity among the related domains might outweigh the advantages of such common pattern, which may result in performance degradations. In this paper, we propose a novel cluster-level based latent factor model to enhance the cross-domain recommendation, which can not only learn the common rating pattern shared across domains with the flexibility in controlling the optimal level of sharing, but also learn the domain-specific rating patterns of users in each domain that involve the discriminative information propitious to performance improvement. To this end, the proposed model is formulated as an optimization problem based on joint nonnegative matrix tri-factorization and an efficient alternating minimization algorithm is developed with convergence guarantee. Extensive experiments on several real world datasets suggest that our proposed model outperforms the state-of-the-art methods for the cross-domain recommendation task.",
"title": ""
},
{
"docid": "ca5eaacea8702798835ca585200b041d",
"text": "ccupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor wellbeing. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational 171",
"title": ""
},
{
"docid": "068295e6848b3228d1f25be84c9bf566",
"text": "We describe an automated system for the large-scale monitoring of Web sites that serve as online storefronts for spam-advertised goods. Our system is developed from an extensive crawl of black-market Web sites that deal in illegal pharmaceuticals, replica luxury goods, and counterfeit software. The operational goal of the system is to identify the affiliate programs of online merchants behind these Web sites; the system itself is part of a larger effort to improve the tracking and targeting of these affiliate programs. There are two main challenges in this domain. The first is that appearances can be deceiving: Web pages that render very differently are often linked to the same affiliate program of merchants. The second is the difficulty of acquiring training data: the manual labeling of Web pages, though necessary to some degree, is a laborious and time-consuming process. Our approach in this paper is to extract features that reveal when Web pages linked to the same affiliate program share a similar underlying structure. Using these features, which are mined from a small initial seed of labeled data, we are able to profile the Web sites of forty-four distinct affiliate programs that account, collectively, for hundreds of millions of dollars in illicit e-commerce. Our work also highlights several broad challenges that arise in the large-scale, empirical study of malicious activity on the Web.",
"title": ""
},
{
"docid": "0ec0af632612fbbc9b4dba1aa8843590",
"text": "The diversity in web object types and their resource requirements contributes to the unpredictability of web service provisioning. In this paper, an eÆcient admission control algorithm, PACERS, is proposed to provide di erent levels of services based on the server workload characteristics. Service quality is ensured by periodical allocation of system resources based on the estimation of request rate and service requirements of prioritized tasks. Admission of lower priority tasks is restricted during high load periods to prevent denial-of-services to high priority tasks. A doublequeue structure is implemented to reduce the e ects of estimation inaccuracy and to utilize the spare capacity of the server, thus increasing the system throughput. Response delays of the high priority tasks are bounded by the length of the prediction period. Theoretical analysis and experimental study show that the PACERS algorithm provides desirable throughput and bounded response delay to the prioritized tasks, without any signi cant impact on the aggregate throughput of the system under various workload.",
"title": ""
},
{
"docid": "09e573ba5fdb1aff5533442a897f1e2d",
"text": "Subjectivityin natural language refers to aspects of language used to express opinions and evaluations (Banfield, 1982; Wiebe, 1994). There are numerous applications for which knowledge of subjectivity is relevant, including genre detection, information extraction, and information retrieval. This paper shows promising results for a straightforward method of identifying collocational clues of subjectivity, as well as evidence of the usefulness of these clues for recognizing opinionated documents.",
"title": ""
},
{
"docid": "3665a82c20eb55c8afd2c7f35b68f49f",
"text": "The formulation and delivery of biopharmaceutical drugs, such as monoclonal antibodies and recombinant proteins, poses substantial challenges owing to their large size and susceptibility to degradation. In this Review we highlight recent advances in formulation and delivery strategies — such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs — and discuss their advantages and limitations. We also highlight current and emerging delivery routes that provide an alternative to injection, including transdermal, oral and pulmonary delivery routes. In addition, the potential of targeted and intracellular protein delivery is discussed.",
"title": ""
},
{
"docid": "b6d3ac278fd39745caa0bb3658a2fab1",
"text": "Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data is local and a shared model is trained from locally computed updates. In contrast with most work on distributed learning, in this scenario (i) data is split vertically, i.e. by features, (ii) only one data provider knows the target variable and (iii) entities are not linked across the data providers. Hence, to the challenge of private learning, we add the potentially negative consequences of mistakes in entity resolution. Our contribution is twofold. First, we describe a three-party end-to-end solution in two phases—privacy-preserving entity resolution and federated logistic regression over messages encrypted with an additively homomorphic scheme—, secure against a honest-but-curious adversary. The system allows learning without either exposing data in the clear or sharing which entities the data providers have in common. Our implementation is as accurate as a naive non-private solution that brings all data in one place, and scales to problems with millions of entities with hundreds of features. Second, we provide what is to our knowledge the first formal analysis of the impact of entity resolution’s mistakes on learning, with results on how optimal classifiers, empirical losses, margins and generalisation abilities are affected. Our results bring a clear and strong support for federated learning: under reasonable assumptions on the number and magnitude of entity resolution’s mistakes, it can be extremely beneficial to carry out federated learning in the setting where each peer’s data provides a significant uplift to the other. ∗All authors contributed equally. Richard Nock is jointly with the Australian National University & the University of Sydney. Giorgio Patrini is now at the University of Amsterdam. 1 ar X iv :1 71 1. 10 67 7v 1 [ cs .L G ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "dd82e1c54a2b73e98788eb7400600be3",
"text": "Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics has become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires a lot of observations and complex luminance measurements. In this work, we present a novel method for detecting SNeIa simply from single-shot observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with many observations.",
"title": ""
},
{
"docid": "ad5b8a1bcea8265351669be4f4c49476",
"text": "Software startups are newly created companies with little operating history and oriented towards producing cutting-edge products. As their time and resources are extremely scarce, and one failed project can put them out of business, startups need effective practices to face with those unique challenges. However, only few scientific studies attempt to address characteristics of failure, especially during the earlystage. With this study we aim to raise our understanding of the failure of early-stage software startup companies. This state-of-practice investigation was performed using a literature review followed by a multiple-case study approach. The results present how inconsistency between managerial strategies and execution can lead to failure by means of a behavioral framework. Despite strategies reveal the first need to understand the problem/solution fit, actual executions prioritize the development of the product to launch on the market as quickly as possible to verify product/market fit, neglecting the necessary learning process.",
"title": ""
},
{
"docid": "9b575699e010919b334ac3c6bc429264",
"text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we",
"title": ""
}
] |
scidocsrr
|
515efb83adb8a27f17b75a8a045e5d35
|
Know What Your Neighbors Do: 3D Semantic Segmentation of Point Clouds
|
[
{
"docid": "395362cb22b0416e8eca67ec58907403",
"text": "This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.",
"title": ""
},
{
"docid": "d3956443e9e1f9dd0c0d995ecd12bfb4",
"text": "Point clouds are an efficient data format for 3D data. However, existing 3D segmentation methods for point clouds either do not model local dependencies [21] or require added computations [14, 23]. This work presents a novel 3D segmentation framework, RSNet1, to efficiently model local structures in point clouds. The key component of the RSNet is a lightweight local dependency module. It is a combination of a novel slice pooling layer, Recurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice pooling layer is designed to project features of unordered points onto an ordered sequence of feature vectors so that traditional end-to-end learning algorithms (RNNs) can be applied. The performance of RSNet is validated by comprehensive experiments on the S3DIS[1], ScanNet[3], and ShapeNet [34] datasets. In its simplest form, RSNets surpass all previous state-of-the-art methods on these benchmarks. And comparisons against previous state-of-the-art methods [21, 23] demonstrate the efficiency of RSNets.",
"title": ""
}
] |
[
{
"docid": "f2f6a182055df59446e1c6dc9718dd8c",
"text": "The emergence of new technologies and services as well as trillions of devices and petabytes of data to be processed and transferred in the Internet of the Future mean that we have to deal with new threats and vulnerabilities, in addition to handle the remaining old ones. Together with the rise of Cyber Warfare and the resulting impact on the environment means that we have to bring intelligence back to the network. Consequently, effective Cyber Defence will be more and more important. In this paper we will show that the proposed requirements for an Early Warning System are a main part of future Cyber Defence. Special attention is given on the challenges associated to the generation of early warning systems for future attacks on the Internet of the Future. The term Cyber War is used frequently but unfortunately with different intends. Therefore, we start with a definition of the term Cyber War focusing on security aspects related to the Internet of the Future, followed by an exemplification of a Cyber War, of its implications and the challenges associated to it. Then we proceed with an analysis of state of the art recent work that has been proposed on the topic. Additionally the weaknesses of these analyzed systems and approaches are presented. Finally we propose guidelines and requirements for future work which will be needed to implement a next generation early warning system for securing the Internet of the Future.",
"title": ""
},
{
"docid": "ed0d1e110347313285a6b478ff8875e3",
"text": "Data mining is an area of computer science with a huge prospective, which is the process of discovering or extracting information from large database or datasets. There are many different areas under Data Mining and one of them is Classification or the supervised learning. Classification also can be implemented through a number of different approaches or algorithms. We have conducted the comparison between three algorithms with help of WEKA (The Waikato Environment for Knowledge Analysis), which is an open source software. It contains different type's data mining algorithms. This paper explains discussion of Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the result, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.",
"title": ""
},
{
"docid": "5eab71f546a7dc8bae157a0ca4dd7444",
"text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.",
"title": ""
},
{
"docid": "5c056ba2e29e8e33c725c2c9dd12afa8",
"text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.",
"title": ""
},
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "4070072c5bd650d1ca0daf3015236b31",
"text": "Automated classiication of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the eeciency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identiication of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full-decoding of selective frames is required only for text analysis. A decision tree classiier built using these features is able to identify sports clips with an accuracy of about 93%.",
"title": ""
},
{
"docid": "406a8143edfeab7f97d451d0af9b7058",
"text": "One of the core questions when designing modern Natural Language Processing (NLP) systems is how to model input textual data such that the learning algorithm is provided with enough information to estimate accurate decision functions. The mainstream approach is to represent input objects as feature vectors where each value encodes some of their aspects, e.g., syntax, semantics, etc. Feature-based methods have demonstrated state-of-the-art results on various NLP tasks. However, designing good features is a highly empirical-driven process, it greatly depends on a task requiring a significant amount of domain expertise. Moreover, extracting features for complex NLP tasks often requires expensive pre-processing steps running a large number of linguistic tools while relying on external knowledge sources that are often not available or hard to get. Hence, this process is not cheap and often constitutes one of the major challenges when attempting a new task or adapting to a different language or domain. The problem of modelling input objects is even more acute in cases when the input examples are not just single objects but pairs of objects, such as in various learning to rank problems in Information Retrieval and Natural Language processing. An alternative to feature-based methods is using kernels which are essentially nonlinear functions mapping input examples into some high dimensional space thus allowing for learning decision functions with higher discriminative power. Kernels implicitly generate a very large number of features computing similarity between input examples in that implicit space. A well-designed kernel function can greatly reduce the effort to design a large set of manually designed features often leading to superior results. However, in the recent years, the use of kernel methods in NLP has been greatly underestimated primarily due to the following reasons: (i) learning with kernels is slow as it requires to carry out optimizaiton in the dual space leading to quadratic complexity; (ii) applying kernels to the input objects encoded with vanilla structures, e.g., generated by syntactic parsers, often yields minor improvements over carefully designed feature-based methods. In this thesis, we adopt the kernel learning approach for solving complex NLP tasks and primarily focus on solutions to the aforementioned problems posed by the use of kernels. In particular, we design novel learning algorithms for training Support Vector Machines with structural kernels, e.g., tree kernels, considerably speeding up the training over the conventional SVM training methods. We show that using the training algorithms developed in this thesis allows for trainining tree kernel models on large-scale datasets containing millions of instances, which was not possible before. Next, we focus on the problem of designing input structures that are fed to tree kernel functions to automatically generate a large set of tree-fragment features. We demonstrate that previously used plain structures generated by syntactic parsers, e.g., syntactic or dependency trees, are often a poor choice thus compromising the expressivity offered by a tree kernel learning framework. We propose several effective design patterns of the input tree structures for various NLP tasks ranging from sentiment analysis to answer passage reranking. The central idea is to inject additional semantic information relevant for the task directly into the tree nodes and let the expressive kernels generate rich feature spaces. 
For the opinion mining tasks, the additional semantic information injected into tree nodes can be word polarity labels, while for more complex tasks of modelling text pairs the relational information about overlapping words in a pair appears to significantly improve the accuracy of the resulting models. Finally, we observe that both feature-based and kernel methods typically treat words as atomic units where matching different yet semantically similar words is problematic. Conversely, the idea of distributional approaches to model words as vectors is much more effective in establishing a semantic match between words and phrases. While tree kernel functions do allow for a more flexible matching between phrases and sentences through matching their syntactic contexts, their representation can not be tuned on the training set as it is possible with distributional approaches. Recently, deep learning approaches have been applied to generalize the distributional word matching problem to matching sentences taking it one step further by learning the optimal sentence representations for a given task. Deep neural networks have already claimed state-of-the-art performance in many computer vision, speech recognition, and natural language tasks. Following this trend, this thesis also explores the virtue of deep learning architectures for modelling input texts and text pairs where we build on some of the ideas to model input objects proposed within the tree kernel learning framework. In particular, we explore the idea of relational linking (proposed in the preceding chapters to encode text pairs using linguistic tree structures) to design a state-of-the-art deep learning architecture for modelling text pairs. We compare the proposed deep learning models that require even less manual intervention in the feature design process then previously described tree kernel methods that already offer a very good trade-off between the feature-engineering effort and the expressivity of the resulting representation. Our deep learning models demonstrate the state-of-the-art performance on a recent benchmark for Twitter Sentiment Analysis, Answer Sentence Selection and Microblog retrieval.",
"title": ""
},
{
"docid": "1b4e49064f8d480134f93a77f385d242",
"text": "Electric load forecasting plays a vital role in smart grids. Short term electric load forecasting forecasts the load that is several hours to several weeks ahead. Due to the nonlinear, non-stationary and nonseasonal nature of the short term electric load time series in small scale power systems, accurate forecasting is challenging. This paper explores Long-Short-Term-Memory (LSTM) based Recurrent Neural Network (RNN) to deal with this challenge. LSTM-based RNN is able to exploit the long term dependencies in the electric load time series for more accurate forecasting. Experiments are conducted to demonstrate that LSTM-based RNN is capable of forecasting accurately the complex electric load time series with a long forecasting horizon. Its performance compares favorably to many other forecasting methods.",
"title": ""
},
{
"docid": "e27d949155cef2885a4ab93f4fba18b3",
"text": "Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 839 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory [corrected]. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors.",
"title": ""
},
{
"docid": "f3a08d4f896f7aa2d0f1fff04764efc3",
"text": "The natural distribution of textual data used in text classification is often imbalanced. Categories with fewer examples are under-represented and their classifiers often perform far below satisfactory. We tackle this problem using a simple probability based term weighting scheme to better distinguish documents in minor categories. This new scheme directly utilizes two critical information ratios, i.e. relevance indicators. Such relevance indicators are nicely supported by probability estimates which embody the category membership. Our experimental study using both Support Vector Machines and Naı̈ve Bayes classifiers and extensive comparison with other classic weighting schemes over two benchmarking data sets, including Reuters-21578, shows significant improvement for minor categories, while the performance for major categories are not jeopardized. Our approach has suggested a simple and effective solution to boost the performance of text classification over skewed data sets. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3471cc591321af991ae82298e25645fa",
"text": "Anomaly detection in videos refers to the identification of events that do not conform to expected behavior. However, almost all existing methods tackle the problem by minimizing the reconstruction errors of training data, which cannot guarantee a larger reconstruction error for an abnormal event. In this paper, we propose to tackle the anomaly detection problem within a video prediction framework. To the best of our knowledge, this is the first work that leverages the difference between a predicted future frame and its ground truth to detect an abnormal event. To predict a future frame with higher quality for normal events, other than the commonly used appearance (spatial) constraints on intensity and gradient, we also introduce a motion (temporal) constraint in video prediction by enforcing the optical flow between predicted frames and ground truth frames to be consistent, and this is the first work that introduces a temporal constraint into the video prediction task. Such spatial and motion constraints facilitate the future frame prediction for normal events, and consequently facilitate to identify those abnormal events that do not conform the expectation. Extensive experiments on both a toy dataset and some publicly available datasets validate the effectiveness of our method in terms of robustness to the uncertainty in normal events and the sensitivity to abnormal events. All codes are released in https://github.com/StevenLiuWen/ano_pred_cvpr2018.",
"title": ""
},
{
"docid": "a1a800cf63f997501e1a35c0da0e075b",
"text": "In this paper, an improved design of an ironless axial flux permanent magnet synchronous generator (AFPMSG) is presented for direct-coupled wind turbine application considering wind speed characteristics. The partial swarm optimization method is used to perform a multi-objective design optimization of the ironless AFPMSG in order to decrease the active material cost and increase the annual energy yield of the generator over the entire range of operating wind speed. General practical and mechanical limitations in the design of the generator are considered as optimization constraints. For accurate analytical design of the generator, distribution of the flux in all parts of the machine is obtained through a modified magnetic equivalent circuit model of AFPMSG. In this model, the magnetic saturation of the rotor back iron cores is considered using a nonlinear iterative algorithm. Various combinations of pole and coil numbers are studied in the design of a 30 kW AFPMSG via the optimization procedure. Finally, 3-D finite-element model of the generator was prepared to confirm the validity of the proposed design procedure and the generator performance for various wind speeds.",
"title": ""
},
{
"docid": "08951a16123c26f5ac4241457b539454",
"text": "High quality, physically accurate rendering at interactiv e rates has widespread application, but is a daunting task. We attempt t o bridge the gap between high-quality offline and interactive render ing by using existing environment mapping hardware in combinatio with a novel Image Based Rendering (IBR) algorithm. The primary c ontribution lies in performing IBR in reflection space. This me thod can be applied to ordinary environment maps, but for more phy sically accurate rendering, we apply reflection space IBR to ra diance environment maps. A radiance environment map pre-integrat s Bidirectional Reflection Distribution Function (BRDF) wit h a lighting environment. Using the reflection-space IBR algorithm o n radiance environment maps allows interactive rendering of ar bitr ry objects with a large class of complex BRDFs in arbitrary ligh ting environments. The ultimate simplicity of the final algor ithm suggests that it will be widely and immediately valuable giv en the ready availability of hardware assisted environment mappi ng. CR categories and subject descriptors: I.3.3 [Computer Graphics]: Picture/Image generation; I.3.7 [Image Proces sing]: Enhancement.",
"title": ""
},
{
"docid": "3c2600b70709cbc167ead250171eec11",
"text": "A compact wideband circularly polarized (CP) patch antenna utilizing a quad-feed network (QFN) and quadruple semi-fan-annulus (QSFA) patches is proposed. For using a coupled-line dual-band power divider (PD) and improved phase shifter (PS) with stepped-impedance-open-stub (SIOS), the QFN has a compact size and exhibits the electromagnetic (EM) simulated bandwidth over 96% for the power distribution -6.5±0.5 dB, return loss (RL) > 14 dB, and a consistent 90° (±9°) phase deviation. Moreover, the QSFA patches can expand the CP bandwidth and reduce the size of the antenna effectively. Measurement results show that the proposed antenna achieves CP bandwidth of 72% from 1.15 to 2.45 GHz for RL > 10 dB, axial ratio dB, and 3-dB gain variation (gain > 4.4 dBi). Therefore, the proposed antenna is a good candidate for Compass Navigation Satellite System (CNSS) applications.",
"title": ""
},
{
"docid": "b2e1b184096433db2bbd46cf01ef99c6",
"text": "This is a short overview of a totally ordered broadcast protocol used by ZooKeeper, called Zab. It is conceptually easy to understand, is easy to implement, and gives high performance. In this paper we present the requirements ZooKeeper makes on Zab, we show how the protocol is used, and we give an overview of how the protocol works.",
"title": ""
},
{
"docid": "08084de7a702b87bd8ffc1d36dbf67ea",
"text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.",
"title": ""
},
{
"docid": "df63f01ed7b35b9e4e5638305d1aa87c",
"text": "Most prior work on information extraction has focused on extracting information from text in digital documents. However, often, the most important information being reported in an article is presented in tabular form in a digital document. If the data reported in tables can be extracted and stored in a database, the data can be queried and joined with other data using database management systems. In order to prepare the data source for table search, accurately detecting the table boundary plays a crucial role for the later table structure decomposition. Table boundary detection and content extraction is a challenging problem because tabular formats are not standardized across all documents. In this paper, we propose a simple but effective preprocessing method to improve the table boundary detection performance by considering the sparse-line property of table rows. Our method easily simplifies the table boundary detection problem into the sparse line analysis problem with much less noise. We design eight line label types and apply two machine learning techniques, Conditional Random Field (CRF) and Support Vector Machines (SVM), on the table boundary detection field. The experimental results not only compare the performances between the machine learning methods and the heuristics-based method, but also demonstrate the effectiveness of the sparse line analysis in the table boundary detection.",
"title": ""
},
{
"docid": "f9080c1368f2f999832df35eb45ba9b5",
"text": "SIFT (Scale Invariant Feature Transform) is an important local invariant feature descriptor. Since its expensive computation, SURF (Speeded-Up Robust Features) is proposed. Both of them are designed mainly for gray images. However, color provides valuable information in object description and matching tasks. To overcome the drawback and to increase the descriptor's distinctiveness, this paper presents a novel feature descriptor which combines local kernel color histograms and Haar wavelet responses to construct the feature vector. So the descriptor is a two elements vector. In image matching process, SURF descriptor is first compared, then the unmatched points are computed by Bhattacharyya distance between their local kernel color histograms. Extensive experimental evaluations show that the method has better robustness than the original SURF. The ratio of correct matches is increased by about 8.9% in the given dataset.",
"title": ""
},
{
"docid": "8414871e63481b81677a47f0248cdda2",
"text": "Examination is one of the ways to assess the receptiveness of student to various class teachings in various schools or institutions from primary to post primary and tertiary institutions. But this is not without problem like impersonation, student class absenteeism and debtor student sitting for examination especially in private institutions. The proposed system developed with the aim of solving stated problems. The system did not only has the ability to take student class attendance and screening before and after examination but it can also be used to verify whether such student is in debit to the institution or whether the student has paid required percentage of the tuition before being allowed to sit for the examination thereby reducing number of student in debit before examination started, it will also eliminate impersonation during examination and stress of manual class attendance taking and record keeping, also avoid writing of name of absent student unlike in manual process of class attendance. The system will also make it easier to calculate required percentage of class attendance before any student is allowed to sit for an examination in any institution. The system adopted biometric access control techniques, which is designed with extended graphical user interface by using Microsoft visual studio 2010 and integrated with Microsoft fingerprint reader. The student information is stored by MySQL which serve as database located in the user's computer or server. KeywordsBiometric; Fingerprint; Examination; Attendance; Tuition; Fees; Institution.",
"title": ""
},
{
"docid": "f5aa0531d2b560b4bddd8f93d308f5bc",
"text": "Even well-designed software systems suffer from chronic performance degradation, also known as “software aging”, due to internal (e.g., software bugs) or external (e.g., resource exhaustion) impairments. These chronic problems often fly under the radar of software monitoring systems before causing severe impacts (e.g., system failures). Therefore, it is a challenging issue how to timely predict the occurrence of failures caused by these problems. Unfortunately, the effectiveness of prior approaches are far from satisfactory due to the insufficiency of aging indicators adopted by them. To accurately predict failures caused by software aging which are named as Aging-Related Failure (ARFs), this paper presents a novel entropy-based aging indicator, namely Multidimensional Multi-scale Entropy (MMSE) which leverages the complexity embedded in runtime performance metrics to indicate software aging. To the best of our knowledge, this is the first time to leverage entropy to predict ARFs. Based upon MMSE, we implement three failure prediction approaches encapsulated in a proof-of-concept prototype named ARF-Predictor. The experimental evaluations in a Video on Demand (VoD) system, and in a real-world production system, AntVision, show that ARF-Predictor can predict ARFs with a very high accuracy and a low <italic>Ahead-Time-To-Failure (<inline-formula> <tex-math notation=\"LaTeX\">$ATTF$</tex-math><alternatives><inline-graphic xlink:href=\"chen-ieq1-2604381.gif\"/> </alternatives></inline-formula>)</italic>. Compared to previous approaches, ARF-Predictor improves the prediction accuracy by about 5 times and reduces <inline-formula><tex-math notation=\"LaTeX\">$ATTF$</tex-math><alternatives> <inline-graphic xlink:href=\"chen-ieq2-2604381.gif\"/></alternatives></inline-formula> even by 3 orders of magnitude. In addition, ARF-Predictor is light-weight enough to satisfy the real-time requirement.",
"title": ""
}
] |
scidocsrr
|
d51662e9c51116200c3e2a36ebf195f2
|
Extraction and Processing of Rich Semantics from Medical Texts
|
[
{
"docid": "f3a1789e765ea0325a3b31e0b436543d",
"text": "Medical care is vital and challenging task as the amount of unstructured and unformalized data has grown dramatically over last decades. The article is dedicated to SMDA project an attempt to build a framework for semantic medicine application for Almazov medical research center, FANW MRC. In this paper we investigate modern approaches to medical textual data processing and analysis, however mentioned approaches do not give a complete background for solving our task. We spot a process as a combination of existing tools as well as our heuristic algorithms, techniques and tools. The paper proposes a new approach to natural language processing and concept extraction applied to medical certificates, doctors’ notes and patients’ diaries. The main purpose of the article is to present a way to solve a particular problem of medical concept extraction and knowledge formalization from an unstructured, lacking in syntax and noisy text.",
"title": ""
}
] |
[
{
"docid": "e6034310ee28d8ed4cbd1ea4c71cd76b",
"text": "This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary. C. Bartneck ( ) Department of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5600 Eindhoven, The Netherlands e-mail: c.bartneck@tue.nl D. Kulić Nakamura & Yamane Lab, Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan e-mail: dana@ynl.t.u-tokyo.ac.jp E. Croft · S. Zoghbi Department of Mechanical Engineering, University of British Columbia, 6250 Applied Science Lane, Room 2054, Vancouver, V6T 1Z4, Canada E. Croft e-mail: ecroft@mech.ubc.ca S. Zoghbi e-mail: szoghbi@mech.ubc.ca",
"title": ""
},
{
"docid": "b8b02f98f21b81ad5e25a73f5f95598f",
"text": "Datalog is a family of ontology languages that combine good computational properties with high expressive power. Datalog languages are provably able to capture many relevant Semantic Web languages. In this paper we consider the class of weakly-sticky (WS) Datalog programs, which allow for certain useful forms of joins in rule bodies as well as extending the well-known class of weakly-acyclic TGDs. So far, only nondeterministic algorithms were known for answering queries on WS Datalog programs. We present novel deterministic query answering algorithms under WS Datalog. In particular, we propose: (1) a bottom-up grounding algorithm based on a query-driven chase, and (2) a hybrid approach based on transforming a WS program into a so-called sticky one, for which query rewriting techniques are known. We discuss how our algorithms can be optimized and effectively applied for query answering in real-world scenarios.",
"title": ""
},
{
"docid": "e055656bcc3a8b2131454f249040cc4a",
"text": "A double-identity fingerprint is a fake fingerprint created by combining features from two different fingers, so that it has a high chance to be falsely matched with fingerprints from both fingers. This paper studies the feasibility of creating double-identity fingerprints by proposing two possible techniques and evaluating to what extent they may be used to fool the state-of-the-art fingerprint recognition systems. The results of systematic experiments suggest that existing algorithms are highly vulnerable to this specific attack (about 90% chance of success at FAR = 0.1%) and that the fingerprint patterns generated might be realistic enough to fool human examiners.",
"title": ""
},
{
"docid": "f0bbe4e6d61a808588153c6b5fc843aa",
"text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.",
"title": ""
},
{
"docid": "4b10fb997b4c38745b030e5f525a99a6",
"text": "Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient enough to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring them for being used in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drifting in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.",
"title": ""
},
{
"docid": "1fd8b9ea33ad60c23fa90b3b971be111",
"text": "Precise positioning of an automobile to within lane-level precision can enable better navigation and context-awareness. However, GPS by itself cannot provide such precision in obstructed urban environments. In this paper, we present a system called CARLOC for lane-level positioning of automobiles. CARLOC uses three key ideas in concert to improve positioning accuracy: it uses digital maps to match the vehicle to known road segments; it uses vehicular sensors to obtain odometry and bearing information; and it uses crowd-sourced location of estimates of roadway landmarks that can be detected by sensors available in modern vehicles. CARLOC unifies these ideas in a probabilistic position estimation framework, widely used in robotics, called the sequential Monte Carlo method. Through extensive experiments on a real vehicle, we show that CARLOC achieves sub-meter positioning accuracy in an obstructed urban setting, an order-of-magnitude improvement over a high-end GPS device.",
"title": ""
},
{
"docid": "b52da336c6d70923a1c4606f5076a3ba",
"text": "Given the recent explosion of interest in streaming data and online algorithms, clustering of time-series subsequences, extracted via a sliding window, has received much attention. In this work, we make a surprising claim. Clustering of time-series subsequences is meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising because it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method that, based on the concept of time-series motifs, is able to meaningfully cluster subsequences on some time-series datasets.",
"title": ""
},
{
"docid": "8c884de141350858bc1ffaa8ef56987e",
"text": "The simple Bayesian classi er (SBC), sometimes called Naive-Bayes, is built based on a conditional independence model of each attribute given the class. The model was previously shown to be surprisingly robust to obvious violations of this independence assumption, yielding accurate classi cation models even when there are clear conditional dependencies. The SBC can serve as an excellent tool for initial exploratory data analysis when coupled with a visualizer that makes its structure comprehensible. We describe such a visual representation of the SBC model that has been successfully implemented. We describe the requirements we had for such a visualization and the design decisions we made to satisfy them.",
"title": ""
},
{
"docid": "cc57e42da57af33edc53ba64f33e0178",
"text": "This paper focuses on the design and development of a low-cost QFN package that is based on wirebond interconnects. One of the design goals is to extend the frequency at which the package can be used to 40-50 GHz (above the K band), in the millimeter-wave range. Owing to the use of mass production assembly protocols and materials, such as commercially available QFN in a mold compound, the design that is outlined in this paper significantly reduces the cost of assembly of millimeter wave modules. To operate the package at 50 GHz or a higher frequency, several key design features are proposed. They include the use of through vias (backside vias) and ground bondwires to provide ground return currents. This paper also provides rigorous validation steps that we took to obtain the key high frequency characteristics. Since a molding compound is used in conventional QFN packages, the material and its effectiveness in determining the signal propagation have to be incorporated in the overall design. However, the mold compound creates some extra challenges in the de-embedding task. For example, the mold compound must be removed to expose the probing pads so the effect of the microstrip on the GaAs chip can be obtained and de-embedded. Careful simulation and experimental validation reveal that the proposed QFN design achieves a return loss of -10 dB and an insertion loss of -1.5 dB up to 50 GHz.",
"title": ""
},
{
"docid": "259073050d1a126bfae6db72e5f5d6b3",
"text": "AIM\nTo describe the clinical properties and psychometric soundness of pediatric oral motor feeding assessments.\n\n\nMETHODS\nA systematic search was conducted using Medline, CINAHL, EMBASE, PsycInfo, and HAPI databases. Assessments were analyzed for their clinical and psychometric characteristics.\n\n\nRESULTS\n12 assessment tools were identified to meet the inclusion/exclusion criteria. Clinical properties varied from assessments evaluating oral-motor deficits, screening to identify feeding problems, and monitoring feeding progress. Most assessments were designed for children with developmental disabilities or cerebral palsy. Eleven assessments had psychometric evidence, of these nine had reliability and validity testing (Ability for Basic Feeding and Swallowing Scale for Children, Behavioral Assessment Scale of Oral Functions in Feeding, Dysphagia Disorder Survey, Functional Feeding Assessment-modified, Gisel Video Assessment, Montreal Children's Hospital Feeding Scale, Oral Motor Assessment Scale, Schedule for Oral Motor Assessment, and Screening Tool of Feeding Problems Applied to Children). The Brief Assessment of Motor Function-Oral Motor Deglutition and the Pediatric Assessment Scale for Severe Feeding Problems had reliability testing only. The Slurp Test was not tested for any psychometric properties. Overall, psychometric evidence was inconsistent and inadequate for the evaluative tools.",
"title": ""
},
{
"docid": "8c56987e08f33c4d763341ec251cc463",
"text": "BACKGROUND\nA neonatal haemoglobinopathy screening programme was implemented in Brussels more than a decade ago and in Liège 5 years ago; the programme was adapted to the local situation.\n\n\nMETHODS\nNeonatal screening for haemoglobinopathies was universal, performed using liquid cord blood and an isoelectric focusing technique. All samples with abnormalities underwent confirmatory testing. Major and minor haemoglobinopathies were reported. Affected children were referred to a specialist centre. A central database in which all screening results were stored was available and accessible to local care workers. A central clinical database to monitor follow-up is under construction.\n\n\nRESULTS\nA total of 191,783 newborns were screened. One hundred and twenty-three (1:1559) newborns were diagnosed with sickle cell disease, seven (1:27,398) with beta thalassaemia major, five (1:38,357) with haemoglobin H disease, and seven (1:27,398) with haemoglobin C disease. All major haemoglobinopathies were confirmed, and follow-up of the infants was undertaken except for three infants who did not attend the first medical consultation despite all efforts.\n\n\nCONCLUSIONS\nThe universal neonatal screening programme was effective because no case of major haemoglobinopathy was identified after the neonatal period. The affected children received dedicated medical care from birth. The screening programme, and specifically the reporting of minor haemoglobinopathies, has been an excellent health education tool in Belgium for more than 12 years.",
"title": ""
},
{
"docid": "124c73eb861c0b2fb64d0084b3961859",
"text": "Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based in cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.",
"title": ""
},
{
"docid": "caa252bbfad7ab5c989ae7687818f8ae",
"text": "Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility. \n This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.",
"title": ""
},
{
"docid": "9333fab791f45ba737158f46dc7e857c",
"text": "In recent years, much progress has been made on the development of biodegradable magnesium alloys as \"smart\" implants in cardiovascular and orthopedic applications. Mg-based alloys as biodegradable implants have outstanding advantages over Fe-based and Zn-based ones. However, the extensive applications of Mg-based alloys are still inhibited mainly by their high degradation rates and consequent loss in mechanical integrity. Consequently, extensive studies have been conducted to develop Mg-based alloys with superior mechanical and corrosion performance. This review focuses on the following topics: (i) the design criteria of biodegradable materials; (ii) alloy development strategy; (iii) in vitro performances of currently developed Mg-based alloys; and (iv) in vivo performances of currently developed Mg-based implants, especially Mg-based alloys under clinical trials.",
"title": ""
},
{
"docid": "cc4c0a749c6a3f4ac92b9709f24f03f4",
"text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computational expensive and have traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor, is also discussed.",
"title": ""
},
{
"docid": "84cbc3773d0572439a9ff6a5ab661d62",
"text": "Parallel I/O library performance can vary greatly in response to user-tunable parameter values such as aggregator count, file count, and aggregation strategy. Unfortunately, manual selection of these values is time consuming and dependent on characteristics of the target machine, the underlying file system, and the dataset itself. Some characteristics, such as the amount of memory per core, can also impose hard constraints on the range of viable parameter values. In this work we address these problems by using machine learning techniques to model the performance of the PIDX parallel I/O library and select appropriate tunable parameter values. We characterize both the network and I/O phases of PIDX on a Cray XE6 as well as an IBM Blue Gene/P system. We use the results of this study to develop a machine learning model for parameter space exploration and performance prediction.",
"title": ""
},
{
"docid": "86278d8f36145bdf1df57964623a1a2a",
"text": "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semiparametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.",
"title": ""
},
{
"docid": "34a7ae3283c4f3bcb3e9afff2383de72",
"text": "Latent variable models have been a preferred choice in conversational modeling compared to sequence-to-sequence (seq2seq) models which tend to generate generic and repetitive responses. Despite so, training latent variable models remains to be difficult. In this paper, we propose Latent Topic Conversational Model (LTCM) which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model at each generation step. We study in details how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to better understanding and training of conditional latent models for languages. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.",
"title": ""
},
{
"docid": "ab8cc15fe47a9cf4aa904f7e1eea4bc9",
"text": "Autism, a severe disorder of development, is difficult to detect in very young children. However, children who receive early intervention have improved long-term prognoses. The Modified Checklist for Autism in Toddlers (M-CHAT), consisting of 23 yes/no items, was used to screen 1,293 children. Of the 58 children given a diagnostic/developmental evaluation, 39 were diagnosed with a disorder on the autism spectrum. Six items pertaining to social relatedness and communication were found to have the best discriminability between children diagnosed with and without autism/PDD. Cutoff scores were created for the best items and the total checklist. Results indicate that the M-CHAT is a promising instrument for the early detection of autism.",
"title": ""
},
{
"docid": "93e5ed1d67fe3d20c7b0177539e509c4",
"text": "Business models that rely on social media and user-generated content have shifted from the more traditional business model, where value for the organization is derived from the one-way delivery of products and/or services, to the provision of intangible value based on user engagement. This research builds a model that hypothesizes that the user experiences from social interactions among users, operationalized as personalization, transparency, access to social resources, critical mass of social acquaintances, and risk, as well as with the technical features of the social media platform, operationalized as the completeness, flexibility, integration, and evolvability, influence user engagement and subsequent usage behavior. Using survey responses from 408 social media users, findings suggest that both social and technical factors impact user engagement and ultimately usage with additional direct impacts on usage by perceptions of the critical mass of social acquaintances and risk. KEywORdS Social Interactions, Social Media, Social Networking, Technical Features, Use, User Engagement, User Experience",
"title": ""
}
] |
scidocsrr
|
8fdd51eb7fd7655a22aaf5c8147db9f5
|
Learning to Play With Intrinsically-Motivated, Self-Aware Agents
|
[
{
"docid": "bff43eb80a07c68f372664a8220af8d0",
"text": "Humans powerfully and flexibly interpret the behaviour of other people based on an understanding of their minds: that is, we use a \"theory of mind.\" In this study we distinguish theory of mind, which represents another person's mental states, from a representation of the simple presence of another person per se. The studies reported here establish for the first time that a region in the human temporo-parietal junction (here called the TPJ-M) is involved specifically in reasoning about the contents of another person's mind. First, the TPJ-M was doubly dissociated from the nearby extrastriate body area (EBA; Downing et al., 2001). Second, the TPJ-M does not respond to false representations in non-social control stories. Third, the BOLD response in the TPJ-M bilaterally was higher when subjects read stories about a character's mental states, compared with stories that described people in physical detail, which did not differ from stories about nonhuman objects. Thus, the role of the TPJ-M in understanding other people appears to be specific to reasoning about the content of mental states.",
"title": ""
}
] |
[
{
"docid": "1e4292950f907d26b27fa79e1e8fa41f",
"text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.",
"title": ""
},
{
"docid": "f24bfd745d9f28a96de1d3a897bf91e6",
"text": "In this paper, autoregressive parameter estimation for Kalman filtering speech enhancement is studied. In conventional Kalman filtering speech enhancement, spectral subtraction is usually used for speech autoregressive (AR) parameter estimation. We propose log spectral amplitude (LSA) minimum mean-square error (MMSE) instead of spectral subtraction for the estimation of speech AR parameters. Based on an observation that full-band Kalman filtering speech enhancement often causes an unbalanced noise reduction between speech and non-speech segments, a spectral solution is proposed to overcome the unbalanced reduction of noise. This is done by shaping the spectral envelopes of the noise through likelihood ratio. Our simulation results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "89eee86640807e11fa02d0de4862b3a5",
"text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.",
"title": ""
},
{
"docid": "8414116e8ac1dbbad2403565e897c53e",
"text": "Underground is a challenging environment for wireless communication since the propagation medium is no longer air but soil, rock and water. The well established wireless communication techniques using electromagnetic (EM) waves do not work well in this environment due to three problems: high path loss, dynamic channel condition and large antenna size. New techniques using magnetic induction (MI) can solve two of the three problems (dynamic channel condition and large antenna size), but may still cause even higher path loss. In this paper, a complete characterization of the underground MI communication channel is provided. Based on the channel model, the MI waveguide technique for communication is developed in order to reduce the MI path loss. The performance of the traditional EM wave systems, the current MI systems and our improved MI waveguide system are quantitatively compared. The results reveal that our MI waveguide system has much lower path loss than the other two cases for any channel conditions.",
"title": ""
},
{
"docid": "1bfe17bba2d4a846f5745283594c1464",
"text": "Software engineers need to be able to create, modify, and analyze knowledge stored in software artifacts. A significant amount of these artifacts contain natural language, like version control commit messages, source code comments, or bug reports. Integrated software development environments (IDEs) are widely used, but they are only concerned with structured software artifacts – they do not offer support for analyzing unstructured natural language and relating this knowledge with the source code. We present an integration of natural language processing capabilities into the Eclipse framework, a widely used software IDE. It allows to execute NLP analysis pipelines through the Semantic Assistants framework, a service-oriented architecture for brokering NLP services based on GATE. We demonstrate a number of semantic analysis services helpful in software engineering tasks, and evaluate one task in detail, the quality analysis of source code comments.",
"title": ""
},
{
"docid": "c2332c4484fa18482ef072c003cf2caf",
"text": "The rapid development of smartphone technologies have resulted in the evolution of mobile botnets. The implications of botnets have inspired attention from the academia and the industry alike, which includes vendors, investors, hackers, and researcher community. Above all, the capability of botnets is uncovered through a wide range of malicious activities, such as distributed denial of service (DDoS), theft of business information, remote access, online or click fraud, phishing, malware distribution, spam emails, and building mobile devices for the illegitimate exchange of information and materials. In this study, we investigate mobile botnet attacks by exploring attack vectors and subsequently present a well-defined thematic taxonomy. By identifying the significant parameters from the taxonomy, we compared the effects of existing mobile botnets on commercial platforms as well as open source mobile operating system platforms. The parameters for review include mobile botnet architecture, platform, target audience, vulnerabilities or loopholes, operational impact, and detection approaches. In relation to our findings, research challenges are then presented in this domain.",
"title": ""
},
{
"docid": "bac9584a31e42129fb7a5fe2640f5725",
"text": "During the last few years, continuous progresses in wireless communications have opened new research fields in computer networking, aimed at extending data networks connectivity to environments where wired solutions are impracticable. Among these, vehicular communication is attracting growing attention from both academia and industry, owing to the amount and importance of the related applications, ranging from road safety to traffic control and up to mobile entertainment. Vehicular Ad-hoc Networks (VANETs) are self-organized networks built up from moving vehicles, and are part of the broader class of Mobile Ad-hoc Networks (MANETs). Owing to their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. One of the main challenges posed by VANETs simulations is the faithful characterization of vehicular mobility at both the macroscopic and microscopic levels, leading to realistic non-uniform distributions of cars and velocity, and unique connectivity dynamics. However, freely distributed tools which are commonly used for academic studies only consider limited vehicular mobility issues, while they pay little or no attention to vehicular traffic generation and its interaction with its motion constraints counterpart. Such a simplistic approach can easily raise doubts on the confidence of derived VANETs simulation results. In this paper we present VanetMobiSim, a freely available generator of realistic vehicular movement traces for networks simulators. The traces generated by VanetMobiSim are validated first by illustrating how the interaction between featured motion constraints and traffic generator models is able to reproduce typical phenomena of vehicular traffic. Then, the traces are formally validated against those obtained by TSIS-CORSIM, a benchmark traffic simulator in transportation research. This makes VanetMobiSim one of the few vehicular mobility simulator fully validated and freely available to the vehicular networks research community.",
"title": ""
},
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
},
{
"docid": "a5158ad349d57948e23825a612e187de",
"text": "Topic models, which frequently represent topics as multinomial distributions over words, have been extensively used for discovering latent topics in text corpora. Topic labeling, which aims to assign meaningful labels for discovered topics, has recently gained significant attention. In this paper, we argue that the quality of topic labeling can be improved by considering ontology concepts rather than words alone, in contrast to previous works in this area, which usually represent topics via groups of words selected from topics. We have created: (1) a topic model that integrates ontological concepts with topic models in a single framework, where each topic and each concept are represented as a multinomial distribution over concepts and over words, respectively, and (2) a topic labeling method based on the ontological meaning of the concepts included in the discovered topics. In selecting the best topic labels, we rely on the semantic relatedness of the concepts and their ontological classifications. The results of our experiments conducted on two different data sets show that introducing concepts as additional, richer features between topics and words and describing topics in terms of concepts offers an effective method for generating meaningful labels for the discovered topics.",
"title": ""
},
{
"docid": "a488a74817a8401eff1373d4e21f060f",
"text": "We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence. These models lead to better performance, both in terms of general translation quality and pronoun prediction, when trained on small corpora, although this improvement largely disappears when trained with a larger corpus. We also discover that attention-based neural machine translation is well suited for pronoun prediction and compares favorably with other approaches that were specifically designed for this task.",
"title": ""
},
{
"docid": "7bd7b0b85ae68f0ccd82d597667d8acb",
"text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.",
"title": ""
},
{
"docid": "b6c175a5734ad35a9ebdd2a602100769",
"text": "BACKGROUND\nHigh-flow oxygen therapy through a nasal cannula has been increasingly used in infants with bronchiolitis, despite limited high-quality evidence of its efficacy. The efficacy of high-flow oxygen therapy through a nasal cannula in settings other than intensive care units (ICUs) is unclear.\n\n\nMETHODS\nIn this multicenter, randomized, controlled trial, we assigned infants younger than 12 months of age who had bronchiolitis and a need for supplemental oxygen therapy to receive either high-flow oxygen therapy (high-flow group) or standard oxygen therapy (standard-therapy group). Infants in the standard-therapy group could receive rescue high-flow oxygen therapy if their condition met criteria for treatment failure. The primary outcome was escalation of care due to treatment failure (defined as meeting ≥3 of 4 clinical criteria: persistent tachycardia, tachypnea, hypoxemia, and medical review triggered by a hospital early-warning tool). Secondary outcomes included duration of hospital stay, duration of oxygen therapy, and rates of transfer to a tertiary hospital, ICU admission, intubation, and adverse events.\n\n\nRESULTS\nThe analyses included 1472 patients. The percentage of infants receiving escalation of care was 12% (87 of 739 infants) in the high-flow group, as compared with 23% (167 of 733) in the standard-therapy group (risk difference, -11 percentage points; 95% confidence interval, -15 to -7; P<0.001). No significant differences were observed in the duration of hospital stay or the duration of oxygen therapy. In each group, one case of pneumothorax (<1% of infants) occurred. Among the 167 infants in the standard-therapy group who had treatment failure, 102 (61%) had a response to high-flow rescue therapy.\n\n\nCONCLUSIONS\nAmong infants with bronchiolitis who were treated outside an ICU, those who received high-flow oxygen therapy had significantly lower rates of escalation of care due to treatment failure than those in the group that received standard oxygen therapy. (Funded by the National Health and Medical Research Council and others; Australian and New Zealand Clinical Trials Registry number, ACTRN12613000388718 .).",
"title": ""
},
{
"docid": "68cf646ecd3aa857ec819485eab03d93",
"text": "Since their introduction as a means of front propagation and their first application to edge-based segmentation in the early 90’s, level set methods have become increasingly popular as a general framework for image segmentation. In this paper, we present a survey of a specific class of region-based level set segmentation methods and clarify how they can all be derived from a common statistical framework. Region-based segmentation schemes aim at partitioning the image domain by progressively fitting statistical models to the intensity, color, texture or motion in each of a set of regions. In contrast to edge-based schemes such as the classical Snakes, region-based methods tend to be less sensitive to noise. For typical images, the respective cost functionals tend to have less local minima which makes them particularly well-suited for local optimization methods such as the level set method. We detail a general statistical formulation for level set segmentation. Subsequently, we clarify how the integration of various low level criteria leads to a set of cost functionals. We point out relations between the different segmentation schemes. In experimental results, we demonstrate how the level set function is driven to partition the image plane into domains of coherent color, texture, dynamic texture or motion. Moreover, the Bayesian formulation allows to introduce prior shape knowledge into the level set method. We briefly review a number of advances in this domain.",
"title": ""
},
{
"docid": "f054d091b6d9f190106a0b7642620302",
"text": "In experimental investigations of the persuasive effect of source credibility, it has been frequently demonstrated that highly trustworthy and expert spokespeople induce a greater positive attitude toward the position they advocate than do communicators with less credibility (cf. Stemthal, Phillips, and Dholakia in press). This finding can be explained in terms of cognitive response (cf. Greenwald 1968, 1970; Petty, Ostrom, and Brock 1978). According to this formulation, a message recipient's initial opinion is an important determinant of influence. In response to a persuasive appeal, individuals rehearse their issuerelevant thoughts, as well as those presented to them. Message rejection occurs when people opposed to the communicator's advocacy review counterarguments to assertions made to the message. If a highly credible source inhibits counterarguing. whereas a less credible source does not, cognitive response predicts the superior persuasive power of a highly credible communicator. Consistent with this interpretation. Cook (1969) reported less counterargumentation in response to a competent source than to an incompetent source. Despite the substantial number of studies indicating that a highly credible source is more persuasive than a low credibility source, this finding is less than univocal.",
"title": ""
},
{
"docid": "15d3618efa3413456c6aebf474b18c92",
"text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematicalbased solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography",
"title": ""
},
{
"docid": "87baf6381f4297b6e9af7659ef111f5c",
"text": "Indonesian Sign Language System (ISLS) has been used widely by Indonesian for translating the sign language of disabled people to many applications, including education or entertainment. ISLS consists of static and dynamic gestures in representing words or sentences. However, individual variations in performing sign language have been a big challenge especially for developing automatic translation. The accuracy of recognizing the signs will decrease linearly with the increase of variations of gestures. This research is targeted to solve these issues by implementing the multimodal methods: leap motion and Myo armband controllers (EMG electrodes). By combining these two data and implementing Naïve Bayes classifier, we hypothesized that the accuracy of gesture recognition system for ISLS then can be increased significantly. The data streams captured from hand-poses were based on time-domain series method which will warrant the generated data synchronized accurately. The selected features for leap motion data would be based on fingers positions, angles, and elevations, while for the Myo armband would be based on electrical signal generated by eight channels of EMG electrodes relevant to the activities of linked finger’s and forearm muscles. This study will investigate the accuracy of gesture recognition by using either single modal or multimodal for translating Indonesian sign language. For multimodal strategy, both features datasets were merged into a single dataset which was then used for generating a model for each hand gesture. The result showed that there was a significant improvement on its accuracy, from 91% for single modal using leap motion to 98% for multi-modal (combined with Myo armband). The confusion matrix of multimodal method also showed better performance than the single-modality. Finally, we concluded that the implementation of multi-modal controllers for ISLS’s gesture recognition showed better accuracy and performance compared of single modality of using only leap motion controller.",
"title": ""
},
{
"docid": "1a19b42e030820ae0b9ec944fe8c05a6",
"text": "We introduce a paradigm for nonlocal sparsity reinforced deep convolutional neural network denoising. It is a combination of a local multiscale denoising by a convolutional neural network (CNN) based denoiser and a nonlocal denoising based on a nonlocal filter (NLF), exploiting the mutual similarities between groups of patches. CNN models are leveraged with noise levels that progressively decrease at every iteration of our framework, while their output is regularized by a nonlocal prior implicit within the NLF. Unlike complicated neural networks that embed the nonlocality prior within the layers of the network, our framework is modular, and it uses standard pretrained CNNs together with standard nonlocal filters. An instance of the proposed framework, called NN3D, is evaluated over large grayscale image datasets showing state-of-the-art performance.",
"title": ""
},
{
"docid": "fed29ca6eff6fdb79cdb56a65b30217f",
"text": "During the last decade, the local manufacturing of small wind turbines is becoming an increasingly common approach in rural electrification applications, especially among international networks of renewable energy practitioners. As the number of locally manufactured small wind turbine installations is increasing on all continents, supply chain issues, material shortages or design issues in custom applications are becoming evident. In this paper the development of a series of open access design and analysis tools is presented, which allows the local manufacturer to redesign the small wind turbine according to available materials. In the two case studies, the design tools are used in the field by practitioners in order to overcome supply chain issues during the implementation and design phases of rural electrification projects in Ethiopia and Nepal.",
"title": ""
},
{
"docid": "e55191b2c60a9ebba75c2b36926814fd",
"text": "The dynamics of phloem growth ring formation in silver fir (Abies alba Mill.) and Norway spruce (Picea abies Karst.) at different sites in Slovenia during the droughty growing season of 2003 was studied. We also determined the timing of cambial activity, xylem and phloem formation, and counted the number of cells in the completed phloem and xylem growth rings. Light microscopy of cross-sections revealed that cambial activity started on the phloem and xylem side simultaneously at all three plots. However, prior to this, 1–2 layers of phloem derivatives near the cambium were differentiated without previous divisions. The structure of the early phloem was similar in silver fir and Norway spruce. Differences in the number of late phloem cells were found among sites. Phloem growth rings were the widest in Norway spruce growing at the lowland site. In all investigated trees, the cambium produced 5–12 times more xylem cells than phloem ones. In addition, the variability in the number of cells in the 2003 growth ring around the stem circumference of the same tree and among different trees was higher on the xylem side than on the phloem side. Phloem formation is presumably less dependent on environmental factors but is more internally driven than xylem formation.",
"title": ""
},
{
"docid": "0210a0cd8c530dd181bbae1a5bdd9b1a",
"text": "Most of the social media platforms generate a massive amount of raw data that is slow-paced. On the other hand, Internet Relay Chat (IRC) protocol, which has been extensively used by hacker community to discuss and share their knowledge, facilitates fast-paced and real-time text communications. Previous studies of malicious IRC behavior analysis were mostly either offline or batch processing. This results in a long response time for data collection, pre-processing, and threat detection. However, since the threats can use the latest vulnerabilities to exploit systems (e.g. zero-day attack) and which can spread fast using IRC channels. Current IRC channel monitoring techniques cannot provide the required fast detection and alerting. In this paper, we present an alternative approach to overcome this limitation by providing real-time and autonomic threat detection in IRC channels. We demonstrate the capabilities of our approach using as an example the shadow brokers' leak exploit (the exploit leveraged by WannaCry ransomware attack) that was captured and detected by our framework.",
"title": ""
}
] |
scidocsrr
|
babcea3df82950dd3c1a5fc390fa790b
|
Full soft-switching bidirectional isolated current-fed dual inductor push-pull DC-DC converter for battery energy storage applications
|
[
{
"docid": "f773798785419625b8f283fc052d4ab2",
"text": "The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.",
"title": ""
}
] |
[
{
"docid": "08e8629cf29da3532007c5cf5c57d8bb",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
},
{
"docid": "5d354e8358bd4f52acae2d05e12e28e5",
"text": "In this paper, we exploit memory-augmented neural networks to predict accurate answers to visual questions, even when those answers rarely occur in the training set. The memory network incorporates both internal and external memory blocks and selectively pays attention to each training exemplar. We show that memory-augmented neural networks are able to maintain a relatively long-term memory of scarce training exemplars, which is important for visual question answering due to the heavy-tailed distribution of answers in a general VQA setting. Experimental results in two large-scale benchmark datasets show the favorable performance of the proposed algorithm with the comparison to state of the art.",
"title": ""
},
{
"docid": "5ce6bac4ec1f916c1ebab9da09816c0e",
"text": "High-performance parallel computing architectures are increasingly based on multi-core processors. While current commercially available processors are at 8 and 16 cores, technological and power constraints are limiting the performance growth of the cores and are resulting in architectures with much higher core counts, such as the experimental many-core Intel Single-chip Cloud Computer (SCC) platform. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency.\n In this paper, we first investigate the power behavior of scientific Partitioned Global Address Space (PGAS) application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance tradeoffs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of insights that can be used to support similar power management for PGAS applications on other many-core platforms.",
"title": ""
},
{
"docid": "bcae6eb2ad3a379f889ec9fea12d203b",
"text": "Within the last few decades inkjet printing has grown into a mature noncontact patterning method, since it can produce large-area patterns with high resolution at relatively high speeds while using only small amounts of functional materials. The main fields of interest where inkjet printing can be applied include the manufacturing of radiofrequency identification (RFID) tags, organic thin-film transistors (OTFTs), and electrochromic devices (ECDs), and are focused on the future of plastic electronics. In view of these applications on polymer foils, micrometersized conductive features on flexible substrates are essential. To fabricate conductive features onto polymer substrates, solutionprocessable materials are often used. The most frequently used are dispersions of silver nanoparticles in an organic solvent. Inks of silver nanoparticle dispersions are relatively easy to prepare and, moreover, silver has the lowest resistivity of all metals (1.59mV cm). After printing and evaporation of the solvent, the particles require a thermal-processing step to render the features conductive by removing the organic binder that is present around the nanoparticles. In nonpolar solvents, long alkyl chains with a polar head, like thiols or carboxylic acids, are usually used to stabilize the nanoparticles. Steric stabilization of these particles in nonpolar solvents substantially screens van der Waals attractions and introduces steep steric repulsion between the particles at contact, which avoids agglomeration. In addition, organic binders are often added to the ink to assure not only mechanical integrity and adhesion to the substrate, but also to promote the printability of the ink. Nanoparticles with a diameter below 50 nmhave a significantly reduced sintering temperature, typically between 160 and 300 8C, which is well below the melting temperature of the bulk material (Tm1⁄4 963 8C). Despite these low sintering temperatures conventional heating methods are still not compatible with common polymer foils, such as polycarbonate (PC) and polyethylene terephthalate (PET), due to their low glass-transition temperatures (Tg). In fact, only the expensive high-performance polymers, like polytetrafluoroethylene (PTFE), poly(ether ether ketone) (PEEK), and polyimide (PI) can be used at these temperatures. This represents, however, a significant drawback for the implementation in a large-area production of plastic electronics, being unfavorable in terms of costs. Furthermore, the long sintering time of 60min or more that is generally required to create conductive features also obstructs industrial implementation. Therefore, other techniques have to be used in order to facilitate fast and selective heating of materials. One selective technique for nanoparticle sintering that has been described in literature is based on an argon-ion laser beam that follows the as-printed feature and selectively sinters the central region. Features with a line width smaller than 10mm have been created with this technique. However, the large overall thermal energy impact together with the low writing speed of 0.2mm s 1 of the translational stage are limiting factors. A faster alternative to selectively heat silver nanoparticles is to use microwave radiation. Ceramics and other dielectric materials can be heated by microwaves due to dielectric losses that are caused by dipole polarization. 
Under ambient conditions, however, metals behave as reflectors for microwave radiation, because of their small skin depth, which is defined as the distance at which the incident power is reduced to half of its initial value. The small skin depth results from the high conductance s and the high dielectric loss factor e00 together with a small capacitance. When instead of bulk material, the metal consists of particles and/or is heated to at least 400 8C, the materials absorbs microwave radiation to a greater extent. It is believed that the conductive particle interaction with microwave radiation, i.e., inductive coupling, is mainly based on Maxwell–Wagner polarization, which results from the accumulation of charge at the materials interfaces, electric conduction, and eddy currents. However, the main reasons for successful heating of metallic particles through microwave radiation are not yet fully understood. In contrast to the relatively strongmicrowave absorption by the conductive particles, the polarization of dipoles in thermoplastic polymers below the Tg is limited, which makes the polymer foil’s skin depth almost infinite, hence transparent, to microwave radiation. Therefore, only the conductive particles absorb the microwaves and can be sintered selectively. Recently, it has been shown that it is possible to create conductive printed features with microwave radiation within 3–4min. The resulting conductivity, however, is only approximately 5% of the bulk silver value. In this contribution, we present a study on antenna-supported microwave sintering of conducted features on polymer foils. We",
"title": ""
},
{
"docid": "9333061323e63b7f7adbe9690bd17bcc",
"text": "Vector symbolic architectures (VSAs) are high-dimensional vector representations of objects (e.g., words, image parts), relations (e.g., sentence structures), and sequences for use with machine learning algorithms. They consist of a vector addition operator for representing a collection of unordered objects, a binding operator for associating groups of objects, and a methodology for encoding complex structures. We first develop constraints that machine learning imposes on VSAs; for example, similar structures must be represented by similar vectors. The constraints suggest that current VSAs should represent phrases (“The smart Brazilian girl”) by binding sums of terms, in addition to simply binding the terms directly. We show that matrix multiplication can be used as the binding operator for a VSA, and that matrix elements can be chosen at random. A consequence for living systems is that binding is mathematically possible without the need to specify, in advance, precise neuron-to-neuron connection properties for large numbers of synapses. A VSA that incorporates these ideas, Matrix Binding of Additive Terms (MBAT), is described that satisfies all constraints. With respect to machine learning, for some types of problems appropriate VSA representations permit us to prove learnability rather than relying on simulations. We also propose dividing machine (and neural) learning and representation into three stages, with differing roles for learning in each stage. For neural modeling, we give representational reasons for nervous systems to have many recurrent connections, as well as for the importance of phrases in language processing. Sizing simulations and analyses suggest that VSAs in general, and MBAT in particular, are ready for real-world applications.",
"title": ""
},
{
"docid": "7d63624d982c202de1cfff3951a799a1",
"text": "OBJECTIVE\nVaping may increase the cytotoxic effects of e-cigarette liquid (ECL). We compared the effect of unvaped ECL to e-cigarette vapour condensate (ECVC) on alveolar macrophage (AM) function.\n\n\nMETHODS\nAMs were treated with ECVC and nicotine-free ECVC (nfECVC). AM viability, apoptosis, necrosis, cytokine, chemokine and protease release, reactive oxygen species (ROS) release and bacterial phagocytosis were assessed.\n\n\nRESULTS\nMacrophage culture with ECL or ECVC resulted in a dose-dependent reduction in cell viability. ECVC was cytotoxic at lower concentrations than ECL and resulted in increased apoptosis and necrosis. nfECVC resulted in less cytotoxicity and apoptosis. Exposure of AMs to a sub-lethal 0.5% ECVC/nfECVC increased ROS production approximately 50-fold and significantly inhibited phagocytosis. Pan and class one isoform phosphoinositide 3 kinase inhibitors partially inhibited the effects of ECVC/nfECVC on macrophage viability and apoptosis. Secretion of interleukin 6, tumour necrosis factor α, CXCL-8, monocyte chemoattractant protein 1 and matrix metalloproteinase 9 was significantly increased following ECVC challenge. Treatment with the anti-oxidant N-acetyl-cysteine (NAC) ameliorated the cytotoxic effects of ECVC/nfECVC to levels not significantly different from baseline and restored phagocytic function.\n\n\nCONCLUSIONS\nECVC is significantly more toxic to AMs than non-vaped ECL. Excessive production of ROS, inflammatory cytokines and chemokines induced by e-cigarette vapour may induce an inflammatory state in AMs within the lung that is partly dependent on nicotine. Inhibition of phagocytosis also suggests users may suffer from impaired bacterial clearance. While further research is needed to fully understand the effects of e-cigarette exposure in humans in vivo, we caution against the widely held opinion that e-cigarettes are safe.",
"title": ""
},
{
"docid": "3a25c950c1758616330054e98e9c1ed5",
"text": "Providing security support for mobile ad-hoc networks is challenging for several reasons: (a) wireless networks are susceptible to attacks ranging from passive eavesdropping to active interfering, occasional break-ins by adversaries may be inevitable in a large time window; (b) mobile users demand “anywhere, anytime” services; (c) a scalable solution is needed for a large-scale mobile network. In this paper, we describe a solution that supports ubiquitous security services for mobile hosts, scales to network size, and is robust against break-ins. In our design, we distribute the certification authority functions through a threshold secret sharing mechanism, in which each entity holds a secret share and multiple entities in a local neighborhood jointly provide complete services. We employ localized certification schemes to enable ubiquitous services. We also update the secret shares to further enhance robustness against break-ins. Both simulations and implementation confirm the effectiveness of our design.",
"title": ""
},
{
"docid": "8b0a90d4f31caffb997aced79c59e50c",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. 
Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
},
{
"docid": "3f3df1db38cd30cb5fb8a3e0f35687ac",
"text": "Access locality in single-core system is not easily observed in multicore system due to interleaved requests from different cores. Requests that come to DRAM-based main memory is mapped to a specific bank in DRAM and accessed through row buffer in the bank. DRAM row-buffer conflicts occur when a sequence of requests on different rows goes to the same memory bank, causing much higher memory access latency than requests to the same row or to different banks.\n In this paper, we first show that strong locality exists in memory requests of multicore systems. For many workloads, the accesses to some rows are more frequent than others, which we call them hot rows. Based on the observation, we propose a simple hot row buffer (HRB) scheme that is able to detect and capture these hot rows in DRAM. Results have shown that the proposed scheme is able to provide average 56% hit ratio over all row accesses in a bank for 10 selected workloads from SPEC CPU2006. A simple stream prefetcher prefetching in the hot row is implemented and results show an average of 9.1% IPC improvement over no prefetcher design.",
"title": ""
},
{
"docid": "bbb9412a61bb8497e1d8b6e955e0217b",
"text": "There has been great interest in developing methodologies that are capable of dealing with imprecision and uncertainty. The large amount of research currently being carried out in fuzzy and rough sets is representative of this. Many deep relationships have been established, and recent studies have concluded as to the complementary nature of the two methodologies. Therefore, it is desirable to extend and hybridize the underlying concepts to deal with additional aspects of data imperfection. Such developments offer a high degree of flexibility and provide robust solutions and advanced tools for data analysis. Fuzzy-rough set-based feature (FS) selection has been shown to be highly useful at reducing data dimensionality but possesses several problems that render it ineffective for large datasets. This paper proposes three new approaches to fuzzy-rough FS-based on fuzzy similarity relations. In particular, a fuzzy extension to crisp discernibility matrices is proposed and utilized. Initial experimentation shows that the methods greatly reduce dimensionality while preserving classification accuracy.",
"title": ""
},
{
"docid": "3a5ac4dc112c079955104bda98f80b58",
"text": "This review examines vestibular compensation and vestibular rehabilitation from a unified translational research perspective. Laboratory studies illustrate neurobiological principles of vestibular compensation at the molecular, cellular and systems levels in animal models that inform vestibular rehabilitation practice. However, basic research has been hampered by an emphasis on 'naturalistic' recovery, with time after insult and drug interventions as primary dependent variables. The vestibular rehabilitation literature, on the other hand, provides information on how the degree of compensation can be shaped by specific activity regimens. The milestones of the early spontaneous static compensation mark the re-establishment of static gaze stability, which provides a common coordinate frame for the brain to interpret residual vestibular information in the context of visual, somatosensory and visceral signals that convey gravitoinertial information. Stabilization of the head orientation and the eye orientation (suppression of spontaneous nystagmus) appear to be necessary by not sufficient conditions for successful rehabilitation, and define a baseline for initiating retraining. The lessons from vestibular rehabilitation in animal models offer the possibility of shaping the recovery trajectory to identify molecular and genetic factors that can improve vestibular compensation.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "852f745d3d5b63d8739439020674a509",
"text": "Most of the countries evaluate their energy networks in terms of national security and define as critical infrastructure. Monitoring and controlling of these systems are generally provided by Industrial Control Systems (ICSs) and/or Supervisory Control and Data Acquisition (SCADA) systems. Therefore, this study focuses on the cyber-attack vectors on SCADA systems to research the threats and risks targeting them. For this purpose, TCP/IP based protocols used in SCADA systems have been determined and analyzed at first. Then, the most common cyber-attacks are handled systematically considering hardware-side threats, software-side ones and the threats for communication infrastructures. Finally, some suggestions are given.",
"title": ""
},
{
"docid": "a4d789c37eea4505fff66ebe875601a3",
"text": "A mechanistic model for out-of-order superscalar processors is developed and then applied to the study of microarchitecture resource scaling. The model divides execution time into intervals separated by disruptive miss events such as branch mispredictions and cache misses. Each type of miss event results in characterizable performance behavior for the execution time interval. By considering an interval's type and length (measured in instructions), execution time can be predicted for the interval. Overall execution time is then determined by aggregating the execution time over all intervals. The mechanistic model provides several advantages over prior modeling approaches, and, when estimating performance, it differs from detailed simulation of a 4-wide out-of-order processor by an average of 7%.\n The mechanistic model is applied to the general problem of resource scaling in out-of-order superscalar processors. First, we use the model to determine size relationships among microarchitecture structures in a balanced processor design. Second, we use the mechanistic model to study scaling of both pipeline depth and width in balanced processor designs. We corroborate previous results in this area and provide new results. For example, we show that at optimal design points, the pipeline depth times the square root of the processor width is nearly constant. Finally, we consider the behavior of unbalanced, overprovisioned processor designs based on insight gained from the mechanistic model. We show that in certain situations an overprovisioned processor may lead to improved overall performance. Designs where a processor's dispatch width is wider than its issue width are of particular interest.",
"title": ""
},
{
"docid": "6ebd75996b8a652720b23254c9d77be4",
"text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based",
"title": ""
},
{
"docid": "a608f681a3833d932bf723ca26dfe511",
"text": "The purpose of the study was to explore whether personality traits moderate the association between social comparison on Facebook and subjective well-being, measured as both life satisfaction and eudaimonic well-being. Data were collected via an online questionnaire which measured Facebook use, social comparison behavior and personality traits for 337 respondents. The results showed positive associations between Facebook intensity and both measures of subjective well-being, and negative associations between Facebook social comparison and both measures of subjective well-being. Personality traits were assessed by the Reinforcement Sensitivity Theory personality questionnaire, which revealed that Reward Interest was positively associated with eudaimonic well-being, and Goal-Drive Persistence was positively associated with both measures of subjective well-being. Impulsivity was negatively associated with eudaimonic well-being and the Behavioral Inhibition System was negatively associated with both measures of subjective well-being. Interactions between personality traits and social comparison on Facebook indicated that for respondents with high Goal-Drive Persistence, Facebook social comparison had a positive association with eudaimonic well-being, thus confirming that some personality traits moderate the association between Facebook social comparison and subjective well-being. The results of this study highlight how individual differences in personality may impact how social comparison on Facebook affects individuals’ subjective well-being.",
"title": ""
},
{
"docid": "de0a118cfc02cb830142001f55872ecb",
"text": "The inherent uncertainty associated with unstructured grasping tasks makes establishing a successful grasp difficult. Traditional approaches to this problem involve hands that are complex, fragile, require elaborate sensor suites, and are difficult to control. In this paper, we demonstrate a novel autonomous grasping system that is both simple and robust. The four-fingered hand is driven by a single actuator, yet can grasp objects spanning a wide range of size, shape, and mass. The hand is constructed using polymer-based shape deposition manufacturing, with joints formed by elastomeric flexures and actuator and sensor components embedded in tough rigid polymers. The hand has superior robustness properties, able to withstand large impacts without damage and capable of grasping objects in the presence of large positioning errors. We present experimental results showing that the hand mounted on a three degree of freedom manipulator arm can reliably grasp 5 cm-scale objects in the presence of positioning error of up to 100% of the object size and 10 cm-scale objects in the presence of positioning error of up to 33% of the object size, while keeping acquisition contact forces low.",
"title": ""
},
{
"docid": "fc421a5ef2556b86c34d6f2bb4dc018e",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "8c0d3cfffb719f757f19bbb33412d8c6",
"text": "In this paper, we present a parallel Image-to-Mesh Conversion (I2M) algorithm with quality and fidelity guarantees achieved by dynamic point insertions and removals. Starting directly from an image, it is able to recover the isosurface and mesh the volume with tetrahedra of good shape. Our tightly-coupled shared-memory parallel speculative execution paradigm employs carefully designed contention managers, load balancing, synchronization and optimizations schemes which boost the parallel efficiency with little overhead: our single-threaded performance is faster than CGAL, the state of the art sequential mesh generation software we are aware of. The effectiveness of our method is shown on Blacklight, the Pittsburgh Supercomputing Center's cache-coherent NUMA machine, via a series of case studies justifying our choices. We observe a more than 82% strong scaling efficiency for up to 64 cores, and a more than 95% weak scaling efficiency for up to 144 cores, reaching a rate of 14.7 Million Elements per second. To the best of our knowledge, this is the fastest and most scalable 3D Delaunay refinement algorithm.",
"title": ""
}
] |
scidocsrr
|
06bf6b1c3ad2f5fb1261ddd6fb80f033
|
DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning
|
[
{
"docid": "541ebcc2e081ea1a08bbaba2e9820510",
"text": "We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research question, stylistic cues can help determine the truthfulness of text.",
"title": ""
},
{
"docid": "26cedddd8a5a5f3a947fd6c85b8c41ad",
"text": "In today's world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events, it can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper, is to highlight the role of Twitter, during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter, during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97% accuracy in predicting fake images from real. Also, tweet based features were very effective in distinguishing fake images tweets from real, while the performance of user based features was very poor. Our results, showed that, automated techniques can be used in identifying real images from fake images posted on Twitter.",
"title": ""
}
] |
[
{
"docid": "e78e70d347fb76a79755442cabe1fbe0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "d0649a8b51f61ead177dc60838d749b4",
"text": "Reduction otoplasty is an uncommon procedure performed for macrotia and ear asymmetry. Techniques described in the literature for this procedure are few. The authors present their ear reduction approach that not only achieves the desired reduction effectively and accurately, but also addresses and creates the natural anatomic proportions of the ear, leaving a scar well hidden within the fold of the helix.",
"title": ""
},
{
"docid": "5c2297cf5892ebf9864850dc1afe9cbf",
"text": "In this paper, we propose a novel technique for generating images in the 3D domain from images with high degree of geometrical transformations. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years: GANs (Goodfellow et. al.) and Capsule networks (Sabour, Hinton et. al.) we present: CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN based GANs in generating images with high geometric transformations using rotated MNIST. In the process, we also show the efficacy of using capsules architecture in the GANs domain. Furthermore, we tackle the Gordian Knot in training GANs the performance control and training stability by experimenting with using Wasserstein distance (gradient clipping, penalty) and Spectral Normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation.",
"title": ""
},
{
"docid": "6b81fe23d8c2cb7ad7d296546a3cdadf",
"text": "Please cite this article in press as: H.J. Oh Vis. Comput. (2008), doi:10.1016/j.imavis In this paper, we propose a novel occlusion invariant face recognition algorithm based on Selective Local Non-negative Matrix Factorization (S-LNMF) technique. The proposed algorithm is composed of two phases; the occlusion detection phase and the selective LNMF-based recognition phase. We use a local approach to effectively detect partial occlusions in an input face image. A face image is first divided into a finite number of disjointed local patches, and then each patch is represented by PCA (Principal Component Analysis), obtained by corresponding occlusion-free patches of training images. And the 1-NN threshold classifier is used for occlusion detection for each patch in the corresponding PCA space. In the recognition phase, by employing the LNMF-based face representation, we exclusively use the LNMF bases of occlusion-free image patches for face recognition. Euclidean nearest neighbor rule is applied for the matching. We have performed experiments on AR face database that includes many occluded face images by sunglasses and scarves. The experimental results demonstrate that the proposed local patch-based occlusion detection technique works well and the S-LNMF method shows superior performance to other conventional approaches. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0423596618a9d779c9ca5f3d899fdfe6",
"text": "An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target's internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences.",
"title": ""
},
{
"docid": "2cd53bcf5d0df4cfafd1801378ab20d5",
"text": "0191-8869/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.paid.2009.07.008 * Corresponding author. Tel.: +1 251 460 6548. E-mail address: foster@usouthal.edu (J.D. Foster). 1 Narcissistic personality is best thought of as a dim (Foster & Campbell, 2007). We use the term ‘‘narciss matter of convenience to refer to individuals who sco measures of narcissism, such as a the NPI. Much prior research demonstrates that narcissists take more risks than others, but almost no research has examined what motivates this behavior. The present study tested two potential driving mechanisms of risk-taking by narcissists (i.e., heightened perceptions of benefits and diminished perceptions of risks stemming from risky behaviors) by administering survey measures of narcissism and risk-taking to a sample of 605 undergraduate college students. Contrary to what might be expected, the results suggest that narcissists appreciate the risks associated with risky behaviors just as much as do less narcissistic individuals. Their risk-taking appears to instead be fueled by heightened perceptions of benefits stemming from risky behaviors. These results are consistent with a growing body of evidence suggesting that narcissists engage in some forms of potentially problematic behaviors, such as risk-taking, because of a surplus of eagerness rather than a deficit of inhibition. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5495aeaa072a1f8f696298ebc7432045",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
{
"docid": "c04cf54a40cd84961657bf50153ff68b",
"text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text(local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACER, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.",
"title": ""
},
{
"docid": "9b4800f8cd89cce37bada95cf044b1a0",
"text": "Jumping is used in nature by many small animals to locomote in cluttered environments or in rough terrain. It offers small systems the benefit of overcoming relatively large obstacles at a low energetic cost. In order to be able to perform repetitive jumps in a given direction, it is important to be able to upright after landing, steer and jump again. In this article, we review and evaluate the uprighting and steering principles of existing jumping robots and present a novel spherical robot with a mass of 14 g and a size of 18 cm that can jump up to 62 cm at a take-off angle of 75°, recover passively after landing, orient itself, and jump again. We describe its design details and fabrication methods, characterize its jumping performance, and demonstrate the remote controlled prototype repetitively moving over an obstacle course where it has to climb stairs and go through a window. (See videos 1–4 in the electronic supplementary",
"title": ""
},
{
"docid": "b417b412334d8d5ce931f93f564df528",
"text": "The field of dataset shift has received a growing amount of interest in the last few years. The fact that most real-world applications have to cope with some form of shift makes its study highly relevant. The literature on the topic is mostly scattered, and different authors use different names to refer to the same concepts, or use the same name for different concepts. With this work, we attempt to present a unifying framework through the review and comparison of some of the most important works in the",
"title": ""
},
{
"docid": "c4b6df3abf37409d6a6a19646334bffb",
"text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9ece98aee7056ff6c686c12bcdd41d31",
"text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.",
"title": ""
},
{
"docid": "e306933b27867c99585d7fc82cc380ff",
"text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.",
"title": ""
},
{
"docid": "34a21bf5241d8cc3a7a83e78f8e37c96",
"text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.",
"title": ""
},
{
"docid": "8e2f1f2c73ca3f9754348dd938d4f897",
"text": "During the long history of computer vision, one of the grand challenges has been semantic segmentation which is the ability to segment an unknown image into different parts and objects (e.g., beach, ocean, sun, dog, swimmer). Furthermore, segmentation is even deeper than object recognition because recognition is not necessary for segmentation. Specifically, humans can perform image segmentation without even knowing what the objects are (for example, in satellite imagery or medical X-ray scans, there may be several objects which are unknown, but they can still be segmented within the image typically for further investigation). Performing segmentation without knowing the exact identity of all objects in the scene is an important part of our visual understanding process which can give us a powerful model to understand the world and also be used to improve or augment existing computer vision techniques. Herein this work, we review the field of semantic segmentation as pertaining to deep convolutional neural networks. We provide comprehensive coverage of the top approaches and summarize the strengths, weaknesses and major challenges.",
"title": ""
},
{
"docid": "4f967ef2b57a7e22e61fb4f26286f69a",
"text": "Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods, which limits sample preparation and provides high-quality imaging data essential in the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin, DFO, cyanoacrylate, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.",
"title": ""
},
{
"docid": "4b8ed77a97d2eb2c83ae49da7db9314f",
"text": "From early 1967 to the summer of 1969, the bolic nest-site disnlav between male and feauthor had the opportunity to observe the male during thei; precopulatory courtship.” behavior of a pair of African Ostrich (Struthio The courtship is initiated by male and fecamelus) and their hand-raised chicks in the male as they begin to feed, often with heads Oklahoma City Zoo. Because agonistic behavclose together, while pecking in a nervous, iors could be observed almost daily throughhighly synchronized fashion. As the excitation out the year, and courtship for at least 5 mounts, “the two birds walk towards and months, and because the author knew of very around an area chosen for the symbolic nestfew accounts of ostrich behavior, it was desite display by the male. He throws his wings cided to accumulate as much information up in an alternating rhythm of right-left, flashfrom these birds as possible. Sauer and Sauer ing his white wing feathers. Then suddenlv he (In: The living bird, Vol. 5, Cornell Univ. drops to the groind and begins nesting symPress, Ithaca, 1966, p. 45-76) in their study, bolically in a very exaggerated manner, whirlmainly from the Namib Desert Game Reserve ing dust when his wings sweep the ground. 3, pointed out that the hens having molted earAt the same time he twists his neck in a wav lier than the males initiate the prenuptial that resembles a continuous ‘ corkscrew adactivities. “They will posture and stand very tion’ .” The female responds by walking with erect, urinate and defecate, and otherwise lowered head, curved downward pointing behave in exaggerated manners in front of wings, and drooping tail. When finally she potential or familiar mates.” They become squats on the ground, the cock gets up and increasingly aggressive toward birds other rushes toward her with flapping wings and than the male they court, and particularly so mounts her. toward immature birds. The males begin their courtship later, at which time a red coloration AIM, SUBJECTS, AND METHOD OF of their shins, feet, and faces appears. Their STUDY ceremonial rivalries toward one another beIn the present study a more detailed analysis of agocome increasingly frequent. They may be nistic and courtship displays was attempted than those seen “chasing around in groups, wings held known to the author. The principal subjects were a high, and ‘ dancing’ in flocks numbering up to couple of birds belonging to, and housed in, the Okla-",
"title": ""
},
{
"docid": "bdaa8b87cdaef856b88b7397ddc77d97",
"text": "In artificial neural networks (ANNs), the activation function most used in practice are the logistic sigmoid function and the hyperbolic tangent function. The activation functions used in ANNs have been said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest the use of three new simple functions, complementary log-log, probit and log-log, as activation functions in order to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANNs models using these new activation functions and to compare their performance with some activation functions existing in the literature. This evaluation is performed through two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.",
"title": ""
},
{
"docid": "584456ef251fbf31363832fc82bd3d42",
"text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.",
"title": ""
},
{
"docid": "c81fb61f8c12dfe3bb88d417d9ec645a",
"text": "Existing timeline generation systems for complex events consider only information from traditional media, ignoring the rich social context provided by user-generated content that reveals representative public interests or insightful opinions. We instead aim to generate socially-informed timelines that contain both news article summaries and selected user comments. We present an optimization framework designed to balance topical cohesion between the article and comment summaries along with their informativeness and coverage of the event. Automatic evaluations on real-world datasets that cover four complex events show that our system produces more informative timelines than state-of-theart systems. In human evaluation, the associated comment summaries are furthermore rated more insightful than editor’s picks and comments ranked highly by users.",
"title": ""
}
] |
scidocsrr
|
b1fb21ea0df87d4e0e1538ae386d240f
|
PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds.
|
[
{
"docid": "ff272c41a811b6e0031d6e90a895f919",
"text": "Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a prove of concept and demonstrate the usefulness of our method.",
"title": ""
},
{
"docid": "8b43d399ec64a1d89a62a744720f453e",
"text": "Object tracking is one of the key components of the perception system of autonomous cars and ADASs. With tracking, an ego-vehicle can make a prediction about the location of surrounding objects in the next time epoch and plan for next actions. Object tracking algorithms typically rely on sensory data (from RGB cameras or LIDAR). In fact, the integration of 2D-RGB camera images and 3D-LIDAR data can provide some distinct benefits. This paper proposes a 3D object tracking algorithm using a 3D-LIDAR, an RGB camera and INS (GPS/IMU) sensors data by analyzing sequential 2D-RGB, 3D point-cloud, and the ego-vehicle's localization data and outputs the trajectory of the tracked object, an estimation of its current velocity, and its predicted location in the 3D world coordinate system in the next time-step. Tracking starts with a known initial 3D bounding box for the object. Two parallel mean-shift algorithms are applied for object detection and localization in the 2D image and 3D point-cloud, followed by a robust 2D/3D Kalman filter based fusion and tracking. Reported results, from both quantitative and qualitative experiments using the KITTI database demonstrate the applicability and efficiency of the proposed approach in driving environments.",
"title": ""
}
] |
[
{
"docid": "d157d7b6e1c5796b6d7e8fedf66e81d8",
"text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.",
"title": ""
},
{
"docid": "2dd20fa690d3e9c363401ad1d080afe5",
"text": "There have been some research efforts into identifying activities from smartphone accelerometer data. Kwapisz et al. [1] mined data from smartphone sensors of different users doing different activities, and extracted statistics like the average, standard deviation, and time between peaks from portions of the data. With the features they used, they achieved 91.7% precision overall using Multilayer Perceptron. Ravi et al. [2] attempted to classify similar activities using the mean, standard deviation, energy of the Fourier Transform and correlation between signals on each axis. On a dataset similar to Kwapisz et al., they achieved 72% accuracy using Boosted SVM.",
"title": ""
},
{
"docid": "08255cbafcf9a3dd9dd9d084c1de543e",
"text": "The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.",
"title": ""
},
{
"docid": "c824c8bb8fd9b0b3f0f89df24e8f53d0",
"text": "Ovarian cysts are an extremely common gynecological problem in adolescent. Majority of ovarian cysts are benign with few cases being malignant. Ovarian serous cystadenoma are rare in children. A 14-year-old presented with abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be ovarian serous cystadenoma on histology. In conclusions, germ cell tumors the most important causes for the giant ovarian masses in children. Epithelial tumors should not be forgotten in the differential diagnosis. Keyword: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma",
"title": ""
},
{
"docid": "b254f1e5bbafa8c824842f78b594490b",
"text": "In a previous examination of feedback research (Mory, 1996), the use of feedback in the facilitation of learning was examined extensively according to various historical and paradigmatic views of the past feedback literature. Most of the research presented in that volume in the area of feedback was completed with specific assumptions as to what purpose feedback serves. This still holds true, and even more so, because our theories and paradigms have expanded, and the field of instructional design has undergone and will continue to undergo rapid changes in technologies that will afford new advances to take place in both the delivery and the context of using feedback in instruction. It is not surprising that feedback may have various functions according to the particular learning environment in which it is examined and the particular learning paradigm under which it is viewed. In fact, feedback is incorporated in many paradigms of learning, from the early views of behaviorism (Skinner, 1958), to cognitivism (Gagné, 1985; Kulhavy & Wager 1993) through more recent models of constructivism (Jonassen, 1991, 1999; Mayer, 1999; Willis, 2000), settings such as open learning environments (Hannafin, Land, & Oliver, 1999), and views that support multiple approaches to understanding (Gardner, 1999), to name just a few. While feedback has been an essential element of theories of learning and instruction in the past (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991), it still pervades the literature and instructional models as an important aspect of instruction (Collis, De Boer, & Slotman, 2001; Dick, Carey, & Carey, 2001).",
"title": ""
},
{
"docid": "622b0d9526dfee6abe3a605fa83e92ed",
"text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.",
"title": ""
},
{
"docid": "a341bcf8efb975c078cc452e0eecc183",
"text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.",
"title": ""
},
{
"docid": "f7c61d02ac097c0e8ae0d5613e4a561c",
"text": "A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently. But some questions require a combination of evidence from across different sources to answer correctly. In this paper, we propose two models which make use of multiple passages to generate their answers. Both use an answerreranking approach which reorders the answer candidates generated by an existing state-of-the-art QA model. We propose two methods, namely, strengthbased re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer. Our models have achieved state-of-the-art results on three public open-domain QA datasets: Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8 percentage points of improvement over the former two datasets.",
"title": ""
},
{
"docid": "b1e8f1b40c3a1ca34228358a2e8d8024",
"text": "When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "084b83aed850aca07bed298de455c110",
"text": "Leveraging built-in cameras on smartphones and tablets, face authentication provides an attractive alternative of legacy passwords due to its memory-less authentication process. However, it has an intrinsic vulnerability against the media-based facial forgery (MFF) where adversaries use photos/videos containing victims' faces to circumvent face authentication systems. In this paper, we propose FaceLive, a practical and robust liveness detection mechanism to strengthen the face authentication on mobile devices in fighting the MFF-based attacks. FaceLive detects the MFF-based attacks by measuring the consistency between device movement data from the inertial sensors and the head pose changes from the facial video captured by built-in camera. FaceLive is practical in the sense that it does not require any additional hardware but a generic front-facing camera, an accelerometer, and a gyroscope, which are pervasively available on today's mobile devices. FaceLive is robust to complex lighting conditions, which may introduce illuminations and lead to low accuracy in detecting important facial landmarks; it is also robust to a range of cumulative errors in detecting head pose changes during face authentication.",
"title": ""
},
{
"docid": "5221c87f7ee877a0a7ac0a972df4636d",
"text": "These are exciting times for medical image processing. Innovations in deep learning and the increasing availability of large annotated medical image datasets are leading to dramatic advances in automated understanding of medical images. From this perspective, I give a personal view of how computer-aided diagnosis of medical images has evolved and how the latest advances are leading to dramatic improvements today. I discuss the impact of deep learning on automated disease detection and organ and lesion segmentation, with particular attention to applications in diagnostic radiology. I provide some examples of how time-intensive and expensive manual annotation of huge medical image datasets by experts can be sidestepped by using weakly supervised learning from routine clinically generated medical reports. Finally, I identify the remaining knowledge gaps that must be overcome to achieve clinician-level performance of automated medical image processing systems. Computer-aided diagnosis (CAD) in medical imaging has flourished over the past several decades. New advances in computer software and hardware and improved quality of images from scanners have enabled this progress. The main motivations for CAD have been to reduce error and to enable more efficient measurement and interpretation of images. From this perspective, I will describe how deep learning has led to radical changes in howCAD research is conducted and in howwell it performs. For brevity, I will include automated disease detection and image processing under the rubric of CAD. Financial Disclosure The author receives patent royalties from iCAD Medical. Disclaimer No NIH endorsement of any product or company mentioned in this manuscript should be inferred. The opinions expressed herein are the author’s and do not necessarily represent those of NIH. R.M. Summers (B) Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg. 10, Room 1C224D MSC 1182, Bethesda, MD 20892-1182, USA e-mail: rms@nih.gov URL: http://www.cc.nih.gov/about/SeniorStaff/ronald_summers.html © Springer International Publishing Switzerland 2017 L. Lu et al. (eds.), Deep Learning and Convolutional Neural Networks for Medical Image Computing, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-42999-1_1 3",
"title": ""
},
{
"docid": "efe4706d63dfbbc19027c6cd8dd80f6a",
"text": "Transient receptor potential ion channels (TRP) are a superfamily of non-selective ion channels which are opened in response to a diverse range of stimuli. The TRP vanilloid 4 (TRPV4) ion channel is opened in response to heat, mechanical stimuli, hypo-osmolarity and arachidonic acid metabolites. However, recently TRPV4 has been identified as an ion channel that is modulated by, and opened by intracellular signalling cascades from other receptors and signalling pathways. Although TRPV4 knockout mice show relatively mild phenotypes, some mutations in TRPV4 cause severe developmental abnormalities, such as the skeletal dyplasia and arthropathy. Regulated TRPV4 function is also essential for healthy cardiovascular system function as a potent agonist compromises endothelial cell function, leading to vascular collapse. A better understanding of the signalling mechanisms that modulate TRPV4 function is necessary to understand its physiological roles. Post translational modification of TRPV4 by kinases and other signalling molecules can modulate TRPV4 opening in response to stimuli such as mechanical and hyposmolarity and there is an emerging area of research implicating TRPV4 as a transducer of these signals as opposed to a direct sensor of the stimuli. Due to its wide expression profile, TRPV4 is implicated in multiple pathophysiological states. TRPV4 contributes to the sensation of pain due to hypo-osmotic stimuli and inflammatory mechanical hyperalsgesia, where TRPV4 sensitizaton by intracellular signalling leads to pain behaviors in mice. In the vasculature, TRPV4 is a regulator of vessel tone and is implicated in hypertension and diabetes due to endothelial dysfunction. TRPV4 is a key regulator of epithelial and endothelial barrier function and signalling to and opening of TRPV4 can disrupt these critical protective barriers. In respiratory function, TRPV4 is involved in cystic fibrosis, cilary beat frequency, bronchoconstriction, chronic obstructive pulmonary disease, pulmonary hypertension, acute lung injury, acute respiratory distress syndrome and cough.In this review we highlight how modulation of TRPV4 opening is a vital signalling component in a range of tissues and why understanding of TRPV4 regulation in the body may lead to novel therapeutic approaches to treating a range of disease states.",
"title": ""
},
{
"docid": "a330c7ec22ab644404bbb558158e69e7",
"text": "With the advance in both hardware and software technologies, automated data generation and storage has become faster than ever. Such data is referred to as data streams. Streaming data is ubiquitous today and it is often a challenging task to store, analyze and visualize such rapid large volumes of data. Most conventional data mining techniques have to be adapted to run in a streaming environment, because of the underlying resource constraints in terms of memory and running time. Furthermore, the data stream may often show concept drift, because of which adaptation of conventional algorithms becomes more challenging. One such important conventional data mining problem is that of classification. In the classification problem, we attempt to model the class variable on the basis of one or more feature variables. While this problem has been extensively studied from a conventional mining perspective, it is a much more challenging problem in the data stream domain. In this chapter, we will re-visit the problem of classification from the data stream perspective. The techniques for this problem need to be thoroughly re-designed to address the issue of resource constraints and concept drift. This chapter reviews the state-of-the-art techniques in the literature along with their corresponding advantages and disadvantages.",
"title": ""
},
{
"docid": "f2b4f786ecd63b454437f066deecfe4a",
"text": "The causal role of human papillomavirus (HPV) in all cancers of the uterine cervix has been firmly established biologically and epidemiologically. Most cancers of the vagina and anus are likewise caused by HPV, as are a fraction of cancers of the vulva, penis, and oropharynx. HPV-16 and -18 account for about 70% of cancers of the cervix, vagina, and anus and for about 30-40% of cancers of the vulva, penis, and oropharynx. Other cancers causally linked to HPV are non-melanoma skin cancer and cancer of the conjunctiva. Although HPV is a necessary cause of cervical cancer, it is not a sufficient cause. Thus, other cofactors are necessary for progression from cervical HPV infection to cancer. Long-term use of hormonal contraceptives, high parity, tobacco smoking, and co-infection with HIV have been identified as established cofactors; co-infection with Chlamydia trachomatis (CT) and herpes simplex virus type-2 (HSV-2), immunosuppression, and certain dietary deficiencies are other probable cofactors. Genetic and immunological host factors and viral factors other than type, such as variants of type, viral load and viral integration, are likely to be important but have not been clearly identified.",
"title": ""
},
{
"docid": "36b96bf304de86c5796e285087f23942",
"text": "A variant of the popular non-parametric non-uniform intensity normalization (N3) algorithm is proposed for bias field correction. Several studies have been performed under a variety of conditions evaluating the performance of N3. These studies have demonstrated the importance of certain parameters on the results such as those of the B-spline approximation strategy. We propose the substitution of a fast and robust Bspline approximation routine for improved bias field correction over the B-spline fitting approach of the original N3 algorithm. Our strategy features additional advantages such as hierarchical B-spline fitting within a multiresolution framework. Similar to the original N3 algorithm, we also make the source code, testing, and technical documentation of our contribution available to the public albeit through the Insight Toolkit of the National Institutes of Health.",
"title": ""
},
{
"docid": "3c3980cb427c2630016f26f18cbd4ab9",
"text": "MOS (mean opinion score) subjective quality studies are used to evaluate many signal processing methods. Since laboratory quality studies are time consuming and expensive, researchers often run small studies with less statistical significance or use objective measures which only approximate human perception. We propose a cost-effective and convenient measure called crowdMOS, obtained by having internet users participate in a MOS-like listening study. Workers listen and rate sentences at their leisure, using their own hardware, in an environment of their choice. Since these individuals cannot be supervised, we propose methods for detecting and discarding inaccurate scores. To automate crowdMOS testing, we offer a set of freely distributable, open-source tools for Amazon Mechanical Turk, a platform designed to facilitate crowdsourcing. These tools implement the MOS testing methodology described in this paper, providing researchers with a user-friendly means of performing subjective quality evaluations without the overhead associated with laboratory studies. Finally, we demonstrate the use of crowdMOS using data from the Blizzard text-to-speech competition, showing that it delivers accurate and repeatable results.",
"title": ""
},
{
"docid": "0ce556418f6557d86c59f178a206cd11",
"text": "The efficiency of decision processes which can be divided into two stages has been measured for the whole process as well as for each stage independently by using the conventional data envelopment analysis (DEA) methodology in order to identify the causes of inefficiency. This paper modifies the conventional DEA model by taking into account the series relationship of the two sub-processes within the whole process. Under this framework, the efficiency of the whole process can be decomposed into the product of the efficiencies of the two sub-processes. In addition to this sound mathematical property, the case of Taiwanese non-life insurance companies shows that some unusual results which have appeared in the independent model do not exist in the relational model. In other words, the relational model developed in this paper is more reliable in measuring the efficiencies and consequently is capable of identifying the causes of inefficiency more accurately. Based on the structure of the model, the idea of efficiency decomposition can be extended to systems composed of multiple stages connected in series. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6cc046077267564ed38ed3b28e593ef1",
"text": "Human activity recognition is an active area of research in Computer Vision. One of the challenges of activity recognition system is the presence of noise between related activity classes along with high training and testing time complexity of the system. In this paper, we address these problems by introducing a Robust Least Squares Twin Support Vector Machine (RLS-TWSVM) algorithm. RLS-TWSVM handles the heteroscedastic noise and outliers present in activity recognition framework. Incremental RLS-TWSVM is proposed to speed up the training phase. Further, we introduce the hierarchical approach with RLS-TWSVM to deal with multi-category activity recognition problem. Computational comparisons of our proposed approach on four wellknown activity recognition datasets along with real world machine learning benchmark datasets have been carried out. Experimental results show that our method is not only fast but, yields significantly better generalization performance and is robust in order to handle heteroscedastic noise and outliers.",
"title": ""
},
{
"docid": "051aa7421187bab5d9e11184da16cc9e",
"text": "This paper compares the approaches to reuse in software engineering and knowledge engineering. In detail, definitions are given, the history is enlightened, the main approaches are described, and their feasibility is discussed. The aim of the paper is to show the close relation between software and knowledge engineering and to help the knowledge engineering community to learn from experiences in software engineering with respect to reuse. 1 Reuse in Software Engineering",
"title": ""
},
{
"docid": "63b31d490c626241b067c3d4d65764bf",
"text": "Context: This research is positioned in the field of methods for creating software design and the teaching thereof. Goal: The goal of this research is to study the effects of using a collection of examples for creating a software design. Method: We ran a controlled experiment for evaluating the use of a broad collection of examples for creating software designs by software engineering students. In this study, we focus on software designs as represented through UML class diagrams. The treatment is the use of the collection of examples. These examples are offered via a searchable repository. The outcome variable we study is the quality of the design (as assessed by a group of experts). After this, all students were offered the opportunity to improve their design using the collection of examples. We ran a post-assignment questionnaire to collect qualitative data about the experience of the participants. Results: Considering six quality attributes measured by experts, our results show that: 1) the models of the students who used examples are 18% better than those of who did not use examples. 2) the models of the students who did not use examples for constructing became 19% better after updating their models using examples. We complement our statistical analysis with insights from the post assignment questionnaire. Also, we observed that students are more confident about their design when they use examples. Conclusion: Students deliver better software designs when they use a collection of example software designs.",
"title": ""
}
] |
scidocsrr
|
2278291f5a6973e2df8076b00d2407d2
|
Retinal prosthesis for the blind.
|
[
{
"docid": "b17a8e121f865b7143bc2e38fa367b07",
"text": "Radio frequency (r.f.) has been investigated as a means of externally powering miniature and long term implant telemetry systems. Optimum power transfer from the transmitter to the receiving coil is desired for total system efficiency. A seven step design procedure for the transmitting and receiving coils is described based on r.f., coil diameter, coil spacing, load and the number of turns of the coil. An inductance tapping circuit and a voltage doubler circuit have been built in accordance with the design procedure. Experimental results were within the desired total system efficiency ranges of 18% and 23%, respectively. On a étudié la fréquence radio (f.r.) en tant que source extérieure permettant de faire fonctionner les systèmes télémétriques d'implants miniatures à long terme. Afin d'assurer une efficacité totale au système, il est nécessaire d'obtenir un transfert de puissance optimum de l'émetteur à la bobine réceptrice. On donne la description d'une technique de conception en sept temps, fondée sur la fréquence radio, le diamètre de la bobine, l'espacement des spires, la charge et le nombre de tours de la bobine. Un circuit de captage de tension par induction et un circuit doubleur de tension ont été construits conformément à la méthode de conception. Les résultats expérimentaux étaient compris dans les limites d'efficacité totale souhaitable pour le système, soit 18% à 23%, respectivement. Hochfrequenz wurde als Mittel zur externen Energieversorgung von Miniatur und langfristigen Implantat-Telemetriesystemen untersucht. Zur Verwirklichung der höchsten Leistungsfähigkeit braucht das System optimale Energieübertragung von Sendegerät zu Empfangsspule. Ein auf Hochfrequenz beruhendes siebenstufiges Konstruktionssystem für Sende- und Empfangsspulen wird beschrieben, mit Hinweisen über Spulendurchmesser, Spulenanordnung, Ladung und die Anzahl der Wicklungen. Ein Induktionsanzapfstromkreis und ein Spannungsverdoppler wurden dem Konstruktionsverfahren entsprechend gebaut. Versuchsergebnisse lagen im Bereich des gewünschten Systemleistungsgrades von 18% und 23%.",
"title": ""
}
] |
[
{
"docid": "87f126fcc6c06da7d8e6b23a5630d966",
"text": "Medicinal plants are a validated source for discovery of new leads and standardized herbal medicines. The aim of this study was to assess the activity of Vernonia amygdalina leaf extracts and isolated compounds against gametocytes and sporogonic stages of Plasmodium berghei and to validate the findings on field isolates of Plasmodium falciparum. Aqueous (Ver-H2O) and ethanolic (Ver-EtOH) leaf extracts were tested in vivo for activity against sexual and asexual blood stage P. berghei parasites. In vivo transmission blocking effects of Ver-EtOH and Ver-H2O were estimated by assessing P. berghei oocyst prevalence and density in Anopheles stephensi mosquitoes. Activity targeting early sporogonic stages (ESS), namely gametes, zygotes and ookinetes was assessed in vitro using P. berghei CTRPp.GFP strain. Bioassay guided fractionation was performed to characterize V. amygdalina fractions and molecules for anti-ESS activity. Fractions active against ESS of the murine parasite were tested for ex vivo transmission blocking activity on P. falciparum field isolates. Cytotoxic effects of extracts and isolated compounds vernolide and vernodalol were evaluated on the human cell lines HCT116 and EA.hy926. Ver-H2O reduced the P. berghei macrogametocyte density in mice by about 50% and Ver-EtOH reduced P. berghei oocyst prevalence and density by 27 and 90%, respectively, in An. stephensi mosquitoes. Ver-EtOH inhibited almost completely (>90%) ESS development in vitro at 50 μg/mL. At this concentration, four fractions obtained from the ethylacetate phase of the methanol extract displayed inhibitory activity >90% against ESS. Three tested fractions were also found active against field isolates of the human parasite P. falciparum, reducing oocyst prevalence in Anopheles coluzzii mosquitoes to one-half and oocyst density to one-fourth of controls. The molecules and fractions displayed considerable cytotoxicity on the two tested cell-lines. Vernonia amygdalina leaves contain molecules affecting multiple stages of Plasmodium, evidencing its potential for drug discovery. Chemical modification of the identified hit molecules, in particular vernodalol, could generate a library of druggable sesquiterpene lactones. The development of a multistage phytomedicine designed as preventive treatment to complement existing malaria control tools appears a challenging but feasible goal.",
"title": ""
},
{
"docid": "32ae0b0c5b3ca3a7ede687872d631d29",
"text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P50.047, at 7 days; 5.8% versus 3.2%, P50.20, at 30 days; and 12.0% versus 6.9%, P50.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P50.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P 0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 1998;98:734-741.)",
"title": ""
},
{
"docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e",
"text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.",
"title": ""
},
{
"docid": "2b5310f06277cebc3cace46c56306677",
"text": "Security and privacy of data are one of the prime concerns in today’s Internet of Things (IoT). Conventional security techniques like signature-based detection of malware and regular updates of a signature database are not feasible solutions as they cannot secure such systems effectively, having limited resources. Programming languages permitting immediate memory accesses through pointers often result in applications having memory-related errors, which may lead to unpredictable failures and security vulnerabilities. Furthermore, energy efficient IoT devices running on batteries cannot afford the implementation of cryptography algorithms as such techniques have significant impact on the system power consumption. Therefore, in order to operate IoT in a secure manner, the system must be able to detect and prevent any kind of intrusions before the network (i.e., sensor nodes and base station) is destabilised by the attackers. In this article, we have presented an intrusion detection and prevention mechanism by implementing an intelligent security architecture using random neural networks (RNNs). The application’s source code is also instrumented at compile time in order to detect out-of-bound memory accesses. It is based on creating tags, to be coupled with each memory allocation and then placing additional tag checking instructions for each access made to the memory. To validate the feasibility of the proposed security solution, it is implemented for an existing IoT system and its functionality is practically demonstrated by successfully detecting the presence of any suspicious sensor node within the system operating range and anomalous activity in the base station with an accuracy of 97.23%. Overall, the proposed security solution has presented a minimal performance overhead.",
"title": ""
},
{
"docid": "6089f02c3fc3b1760c03190818c28af1",
"text": "In this paper we suggest viewing images (as well as attacks on them) as a sequence of linear operators and propose novel hashing algorithms employing transforms that are based on matrix invariants. To derive this sequence, we simply cover a two dimensional representation of an image by a sequence of (possibly overlapping) rectangles R/sub i/ whose sizes and locations are chosen randomly/sup 1/ from a suitable distribution. The restriction of the image (representation) to each R/sub i/ gives rise to a matrix A/sub i/. The fact that A/sub i/'s will overlap and are random, makes the sequence (respectively) a redundant and non-standard representation of images, but is crucial for our purposes. Our algorithms first construct a secondary image, derived from input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics. From the secondary image (which does not perceptually resemble the input), we further extract the final features which can be used as a hash value (and can be further suitably quantized). In this paper, we use spectral matrix invariants as embodied by singular value decomposition. Surprisingly, formation of the secondary image turns out be quite important since it not only introduces further robustness (i.e., resistance against standard signal processing transformations), but also enhances the security properties (i.e. resistance against intentional attacks). Indeed, our experiments reveal that our hashing algorithms extract most of the geometric information from the images and hence are robust to severe perturbations (e.g. up to %50 cropping by area with 20 degree rotations) on images while avoiding misclassification. Our methods are general enough to yield a watermark embedding scheme, which will be studied in another paper.",
"title": ""
},
{
"docid": "f0db1871712c0e430162ad8ebbb17f5d",
"text": "Visualizing entire neuronal networks for analysis in the intact brain has been impossible up to now. Techniques like computer tomography or magnetic resonance imaging (MRI) do not yield cellular resolution, and mechanical slicing procedures are insufficient to achieve high-resolution reconstructions in three dimensions. Here we present an approach that allows imaging of whole fixed mouse brains. We modified 'ultramicroscopy' by combining it with a special procedure to clear tissue. We show that this new technique allows optical sectioning of fixed mouse brains with cellular resolution and can be used to detect single GFP-labeled neurons in excised mouse hippocampi. We obtained three-dimensional (3D) images of dendritic trees and spines of populations of CA1 neurons in isolated hippocampi. Also in fruit flies and in mouse embryos, we were able to visualize details of the anatomy by imaging autofluorescence. Our method is ideally suited for high-throughput phenotype screening of transgenic mice and thus will benefit the investigation of disease models.",
"title": ""
},
{
"docid": "91bbea10b8df8a708b65947c8a8832dc",
"text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.",
"title": ""
},
{
"docid": "4334f0fffe71b3250ac8ee78f326f04d",
"text": "The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.",
"title": ""
},
{
"docid": "dc2d2fe3c6dcbe57b257218029091d8c",
"text": "One motivation in the study of development is the discovery of mechanisms that may guide evolutionary change. Here we report how development governs relative size and number of cheek teeth, or molars, in the mouse. We constructed an inhibitory cascade model by experimentally uncovering the activator–inhibitor logic of sequential tooth development. The inhibitory cascade acts as a ratchet that determines molar size differences along the jaw, one effect being that the second molar always makes up one-third of total molar area. By using a macroevolutionary test, we demonstrate the success of the model in predicting dentition patterns found among murine rodent species with various diets, thereby providing an example of ecologically driven evolution along a developmentally favoured trajectory. In general, our work demonstrates how to construct and test developmental rules with evolutionary predictability in natural systems.",
"title": ""
},
{
"docid": "328d2b9a5786729245f18195f36ca75c",
"text": "As CMOS technology is scaled down and adopted for many RF and millimeter-wave radio systems, design of T/R switches in CMOS has received considerable attention. Many T/R switches designed in 0.5 ¿m 65 nm CMOS processes have been reported. Table 4 summarizes these T/R switches. Some of them have become great candidates for WLAN and UWB radios. However, none of them met the requirements of mobile cellular and WPAN 60-GHz radios. CMOS device innovations and novel ideas such as artificial dielectric strips and bandgap structures may provide a comprehensive solution to the challenges of design of T/R switches for mobile cellular and 60-GHz radios.",
"title": ""
},
{
"docid": "b950d3b1bc2a30730b12e2f0016ecd9c",
"text": "Application distribution platforms - or app stores - such as Google Play or Apple AppStore allow users to submit feedback in form of ratings and reviews to downloaded applications. In the last few years, these platforms have become very popular to both application developers and users. However, their real potential for and impact on requirements engineering processes are not yet well understood. This paper reports on an exploratory study, which analyzes over one million reviews from the Apple AppStore. We investigated how and when users provide feedback, inspected the feedback content, and analyzed its impact on the user community. We found that most of the feedback is provided shortly after new releases, with a quickly decreasing frequency over time. Reviews typically contain multiple topics, such as user experience, bug reports, and feature requests. The quality and constructiveness vary widely, from helpful advices and innovative ideas to insulting offenses. Feedback content has an impact on download numbers: positive messages usually lead to better ratings and vice versa. Negative feedback such as shortcomings is typically destructive and misses context details and user experience. We discuss our findings and their impact on software and requirements engineering teams.",
"title": ""
},
{
"docid": "aecef2d4d6716046265c559dbfb351b6",
"text": "This handbook is about writing software requirements specifications and legal contracts, two kinds of documents with similar needs for completeness, consistency, and precision. Particularly when these are written, as they usually are, in natural language, ambiguity—by any definition—is a major cause of their not specifying what they should. Simple misuse of the language in which the document is written is one source of these ambiguities.",
"title": ""
},
{
"docid": "9ff977a9486b2bbc22aff46c3106f9f6",
"text": "Trust and security have prevented businesses from fully accepting cloud platforms. To protect clouds, providers must first secure virtualized data center resources, uphold user privacy, and preserve data integrity. The authors suggest using a trust-overlay network over multiple data centers to implement a reputation system for establishing trust between service providers and data owners. Data coloring and software watermarking techniques protect shared data objects and massively distributed software modules. These techniques safeguard multi-way authentications, enable single sign-on in the cloud, and tighten access control for sensitive data in both public and private clouds.",
"title": ""
},
{
"docid": "b8dbd71ff09f2e07a523532a65f690c7",
"text": "OBJECTIVE\nTo assess whether adolescent obesity is associated with risk for development of major depressive disorder (MDD) or anxiety disorder. Obesity has been linked to psychosocial difficulties among youth.\n\n\nMETHODS\nAnalysis of a prospective community-based cohort originally from upstate New York, assessed four times over 20 years. Participants (n = 776) were 9 to 18 years old in 1983; subsequent assessments took place in 1985 to 1986 (n = 775), 1991 to 1994 (n = 776), and 2001 to 2003 (n = 661). Using Cox proportional hazards analysis, we evaluated the association of adolescent (age range, 12-17.99 years) weight status with risk for subsequent MDD or anxiety disorder (assessed at each wave by structured diagnostic interviews) in males and females. A total of 701 participants were not missing data on adolescent weight status and had > or = 1 subsequent assessments. MDD and anxiety disorder analyses included 674 and 559 participants (free of current or previous MDD or anxiety disorder), respectively. Adolescent obesity was defined as body mass index above the age- and gender-specific 95th percentile of the Centers for Disease Control and Prevention growth reference.\n\n\nRESULTS\nAdolescent obesity in females predicted an increased risk for subsequent MDD (adjusted hazard ratio (HR) = 3.9; 95% confidence interval (CI) = 1.3, 11.8) and for anxiety disorder (HR = 3.8; CI = 1.3, 11.3). Adolescent obesity in males was not statistically significantly associated with risk for MDD (HR = 1.5; CI = 0.5, 3.5) or anxiety disorder (HR = 0.7; CI = 0.2, 2.9).\n\n\nCONCLUSION\nFemales obese as adolescents may be at increased risk for development of depression or anxiety disorders.",
"title": ""
},
{
"docid": "bc95a68be39fe767c06db3c694ec961c",
"text": "Due to the random nature of the ship's motion in an open water environment, the deployment and the landing of air vehicles from a ship can often be difficult and even dangerous. The ability to reliably predict the motion will allow improvements in safety on board ships and facilitate more accurate deployment of vehicles off ships. This paper presents an investigation into the application of artificial neural network methods trained using singular value decomposition and genetic algorithms for the prediction of ship motion. It is shown that the artificial neural network produces excellent predictions and is able to predict the ship motion satisfactorily for up to 7 seconds.",
"title": ""
},
{
"docid": "8c35fd3040e4db2d09e3d6dc0e9ae130",
"text": "Internet of Things is referred to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and can be controlled remotely via cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making its way into the market with its own advantages in high range and better speed. Along with IEEE, WiFi Alliance has a new standard for the proximity applications. Neighbor Awareness Network (NAN) popularly known as WiFi Aware is that standard which enables low power discovery over WiFi and can light up many proximity based used cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications with its benefits in some of the scenarios. When we consider WiFi the infrastructure already exists in terms of access points all around in public and smart phones or tablets come with WiFi as a default feature hence enabling NAN can be easy and if we can pair them with IoT, many innovative use cases can evolve.",
"title": ""
},
{
"docid": "f1fa371e9e17ee136a101c8e69376bd4",
"text": "Many tools allow programmers to develop applications in high-level languages and deploy them in web browsers via compilation to JavaScript. While practical and widely used, these compilers are ad hoc: no guarantee is provided on their correctness for whole programs, nor their security for programs executed within arbitrary JavaScript contexts. This paper presents a compiler with such guarantees. We compile an ML-like language with higher-order functions and references to JavaScript, while preserving all source program properties. Relying on type-based invariants and applicative bisimilarity, we show full abstraction: two programs are equivalent in all source contexts if and only if their wrapped translations are equivalent in all JavaScript contexts. We evaluate our compiler on sample programs, including a series of secure libraries.",
"title": ""
},
{
"docid": "626e4d90b16a4e874c391d79b3ec39fe",
"text": "We propose novel neural temporal models for predicting and synthesizing human motion, achieving state-of-theart in modeling long-term motion trajectories while being competitive with prior work in short-term prediction, with significantly less required computation. Key aspects of our proposed system include: 1) a novel, two-level processing architecture that aids in generating planned trajectories, 2) a simple set of easily computable features that integrate derivative information into the model, and 3) a novel multi-objective loss function that helps the model to slowly progress from the simpler task of next-step prediction to the harder task of multi-step closed-loop prediction. Our results demonstrate that these innovations facilitate improved modeling of long-term motion trajectories. Finally, we propose a novel metric, called Normalized Power Spectrum Similarity (NPSS), to evaluate the long-term predictive ability of motion synthesis models, complementing the popular mean-squared error (MSE) measure of the Euler joint angles over time. We conduct a user study to determine if the proposed NPSS correlates with human evaluation of longterm motion more strongly than MSE and find that it indeed does.",
"title": ""
},
{
"docid": "c0dd3979344c5f327fe447f46c13cffc",
"text": "Clinicians and researchers often ask patients to remember their past pain. They also use patient's reports of relief from pain as evidence of treatment efficacy, assuming that relief represents the difference between pretreatment pain and present pain. We have estimated the accuracy of remembering pain and described the relationship between remembered pain, changes in pain levels and reports of relief during treatment. During a 10-week randomized controlled clinical trial on the effectiveness of oral appliances for the management of chronic myalgia of the jaw muscles, subjects recalled their pretreatment pain and rated their present pain and perceived relief. Multiple regression analysis and repeated measures analyses of variance (ANOVA) were used for data analysis. Memory of the pretreatment pain was inaccurate and the errors in recall got significantly worse with the passage of time (P < 0.001). Accuracy of recall for pretreatment pain depended on the level of pain before treatment (P < 0.001): subjects with low pretreatment pain exaggerated its intensity afterwards, while it was underestimated by those with the highest pretreatment pain. Memory of pretreatment pain was also dependent on the level of pain at the moment of recall (P < 0.001). Ratings of relief increased over time (P < 0.001), and were dependent on both present and remembered pain (Ps < 0.001). However, true changes in pain were not significantly related to relief scores (P = 0.41). Finally, almost all patients reported relief, even those whose pain had increased. These results suggest that reports of perceived relief do not necessarily reflect true changes in pain.",
"title": ""
},
{
"docid": "e4d4a77d7b5ecfaf7450f5b82fe92d17",
"text": "INTRODUCTION The Information Technology – Business Process Outsourcing (IT-BPO) Industry is one of the most dynamic emerging sectors of the Philippines. It has expanded widely and it exhibits great dynamism, but people have the notion that the BPO industry is solely comprised of call centers, when it is actually more diverse with back-offices, knowledge process outsourcing, software design and engineering, animation, game development, as well as transcription. These sub-sectors are still small in terms of the number of establishments, companies and employees, but they are growing steadily, supported by several government programs and industry associations. Given such support and in addition, the technology-intensive nature of the sector, the ITBPO industry could significantly shape the future of the services industry of the Philippines.",
"title": ""
}
] |
scidocsrr
|
cf5105829062cb5aa9769ca860d1d606
|
Waking and dreaming: Related but structurally independent. Dream reports of congenitally paraplegic and deaf-mute persons
|
[
{
"docid": "63bd93cf0294d71db4aa0eb7b9a39fa2",
"text": "Sleep researchers in different disciplines disagree about how fully dreaming can be explained in terms of brain physiology. Debate has focused on whether REM sleep dreaming is qualitatively different from nonREM (NREM) sleep and waking. A review of psychophysiological studies shows clear quantitative differences between REM and NREM mentation and between REM and waking mentation. Recent neuroimaging and neurophysiological studies also differentiate REM, NREM, and waking in features with phenomenological implications. Both evidence and theory suggest that there are isomorphisms between the phenomenology and the physiology of dreams. We present a three-dimensional model with specific examples from normally and abnormally changing conscious states.",
"title": ""
}
] |
[
{
"docid": "9ce3f1a67d23425e3920670ac5a1f9b4",
"text": "We examine the limits of consistency in highly available and fault-tolerant distributed storage systems. We introduce a new property—convergence—to explore the these limits in a useful manner. Like consistency and availability, convergence formalizes a fundamental requirement of a storage system: writes by one correct node must eventually become observable to other connected correct nodes. Using convergence as our driving force, we make two additional contributions. First, we close the gap between what is known to be impossible (i.e. the consistency, availability, and partition-tolerance theorem) and known systems that are highly-available but that provide weaker consistency such as causal. Specifically, in an asynchronous system, we show that natural causal consistency, a strengthening of causal consistency that respects the real-time ordering of operations, provides a tight bound on consistency semantics that can be enforced without compromising availability and convergence. In an asynchronous system with Byzantine-failures, we show that it is impossible to implement many of the recently introduced forking-based consistency semantics without sacrificing either availability or convergence. Finally, we show that it is not necessary to compromise availability or convergence by showing that there exist practically useful semantics that are enforceable by available, convergent, and Byzantine-fault tolerant systems.",
"title": ""
},
{
"docid": "93d4e6aba0ef5c17bb751ff93f0d3848",
"text": "In this work we propose a new SIW structure, called the corrugated SIW (CSIW), which does not require conducting vias to achieve TE10 type boundary conditions at the side walls. Instead, the vias are replaced by quarter wavelength microstrip stubs arranged in a corrugated pattern on the edges of the waveguide. This, along with series interdigitated capacitors, results in a waveguide section comprising two separate conductors, which facilitates shunt connection of active components such as Gunn diodes.",
"title": ""
},
{
"docid": "7ca863355d1fb9e4954c360c810ece53",
"text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.",
"title": ""
},
{
"docid": "4d089acf0f7e1bae074fc4d9ad8ee7e3",
"text": "The consequences of exodontia include alveolar bone resorption and ultimately atrophy to basal bone of the edentulous site/ridges. Ridge resorption proceeds quickly after tooth extraction and significantly reduces the possibility of placing implants without grafting procedures. The aims of this article are to describe the rationale behind alveolar ridge augmentation procedures aimed at preserving or minimizing the edentulous ridge volume loss. Because the goal of these approaches is to preserve bone, exodontia should be performed to preserve as much of the alveolar process as possible. After severance of the supra- and subcrestal fibrous attachment using scalpels and periotomes, elevation of the tooth frequently allows extraction with minimal socket wall damage. Extraction sockets should not be acutely infected and be completely free of any soft tissue fragments before any grafting or augmentation is attempted. Socket bleeding that mixes with the grafting material seems essential for success of this procedure. Various types of bone grafting materials have been suggested for this purpose, and some have shown promising results. Coverage of the grafted extraction site with wound dressing materials, coronal flap advancement, or even barrier membranes may enhance wound stability and an undisturbed healing process. Future controlled clinical trials are necessary to determine the ideal regimen for socket augmentation.",
"title": ""
},
{
"docid": "8b060d80674bd3f329a675f1a3f4bce2",
"text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.",
"title": ""
},
{
"docid": "2e475a64d99d383b85730e208703e654",
"text": "—Detecting a variety of anomalies in computer network, especially zero-day attacks, is one of the real challenges for both network operators and researchers. An efficient technique detecting anomalies in real time would enable network operators and administrators to expeditiously prevent serious consequences caused by such anomalies. We propose an alternative technique, which based on a combination of time series and feature spaces, for using machine learning algorithms to automatically detect anomalies in real time. Our experimental results show that the proposed technique can work well for a real network environment, and it is a feasible technique with flexible capabilities to be applied for real-time anomaly detection.",
"title": ""
},
{
"docid": "8d9246e7780770b5f7de9ef0adbab3e6",
"text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.",
"title": ""
},
{
"docid": "0dafc618dbeb04c5ee347142d915a415",
"text": "Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.",
"title": ""
},
{
"docid": "757c7ede10552c51ad4e91bff275f96c",
"text": "For several years, web caching has been used to meet the ever-increasing Web access loads. A fundamental capability of all such systems is that of inter-cache coordination, which can be divided into two main types: explicit and implicit coordination. While the former allows for greater control over resource allocation, the latter does not suffer from the additional communication overhead needed for coordination. In this paper, we consider a network in which each router has a local cache that caches files passing through it. By additionally storing minimal information regarding caching history, we develop a simple content caching, location, and routing systems that adopts an implicit, transparent, and best-effort approach towards caching. Though only best effort, the policy outperforms classic policies that allow explicit coordination between caches.",
"title": ""
},
{
"docid": "64723e2bb073d0ba4412a9affef16107",
"text": "The debate on the entrepreneurial university has raised questions about what motivates academics to engage with industry. This paper provides evidence, based on survey data for a comprehensive sample of UK investigators in the physical and engineering sciences. Our results suggest that most academics engage with industry to further their research rather than to commercialize their knowledge. However, there are differences in terms of the channels of engagement. While patenting and spin-off company formation is motivated exclusively by commercialization, joint research, contract research and consulting are strongly informed by research-related motives. We conclude that policy should refrain from focusing on monetary incentives for industry engagement and consider a broader range of incentives for promoting interaction between academia and industry.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "a064a4b8e19068526e417643788d0b04",
"text": "Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.",
"title": ""
},
{
"docid": "2b942943bebdc891a4c9fa0f4ac65a4b",
"text": "A new architecture based on the Multi-channel Convolutional Neural Network (MCCNN) is proposed for recognizing facial expressions. Two hard-coded feature extractors are replaced by a single channel which is partially trained in an unsupervised fashion as a Convolutional Autoencoder (CAE). One additional channel that contains a standard CNN is left unchanged. Information from both channels converges in a fully connected layer and is then used for classification. We perform two distinct experiments on the JAFFE dataset (leave-one-out and ten-fold cross validation) to evaluate our architecture. Our comparison with the previous model that uses hard-coded Sobel features shows that an additional channel of information with unsupervised learning can significantly boost accuracy and reduce the overall training time. Furthermore, experimental results are compared with benchmarks from the literature showing that our method provides state-of-the-art recognition rates for facial expressions. Our method outperforms previously published methods that used hand-crafted features by a large margin.",
"title": ""
},
{
"docid": "b598cf655e2a039923163271fefb8ede",
"text": "The 3GPP has recently published the first version of the Release 14 standard that includes support for V2V communications using LTE sidelink communications (referred to as LTE-V, LTE-V2X, LTE-V2V or Cellular V2X). The standard includes a mode (mode 4) where vehicles autonomously select and manage the radio resources without any cellular infrastructure support. This is highly relevant since V2V safety applications cannot depend on the availability of infrastructure-based cellular coverage, and transforms LTE-V into a possible (or complimentary) alternative to 802.11p. The performance of LTE-V in mode 4 is highly dependent on its distributed scheduling protocol (sensing-based Semi-Persistent Scheduling) that is used by vehicles to reserve resources for their transmissions. This paper presents the first evaluation of the performance and operation of this protocol under realistic traffic conditions in urban scenarios. The evaluation demonstrates that further enhancements should be investigated to reduce packet collisions.",
"title": ""
},
{
"docid": "25305e33949beff196ff6c0946d1807b",
"text": "Clinical and preclinical studies have gathered substantial evidence that stress response alterations play a major role in the development of major depression, panic disorder and posttraumatic stress disorder. The stress response, the hypothalamic pituitary adrenocortical (HPA) system and its modulation by CRH, corticosteroids and their receptors as well as the role of natriuretic peptides and neuroactive steroids are described. Examplarily, we review the role of the HPA system in major depression, panic disorder and posttraumatic stress disorder as well as its possible relevance for treatment. Impaired glucocorticoid receptor function in major depression is associated with an excessive release of neurohormones, like CRH to which a number of signs and symptoms characteristic of depression can be ascribed. In panic disorder, a role of central CRH in panic attacks has been suggested. Atrial natriuretic peptide (ANP) is causally involved in sodium lactate-induced panic attacks. Furthermore, preclinical and clinical data on its anxiolytic activity suggest that non-peptidergic ANP receptor ligands may be of potential use in the treatment of anxiety disorders. Recent data further suggest a role of 3alpha-reduced neuroactive steroids in major depression, panic attacks and panic disorder. Posttraumatic stress disorder is characterized by a peripheral hyporesponsive HPA-system and elevated CRH concentrations in CSF. This dissociation is probably related to an increased risk for this disorder. Antidepressants are effective both in depression and anxiety disorders and have major effects on the HPA-system, especially on glucocorticoid and mineralocorticoid receptors. Normalization of HPA-system abnormalities is a strong predictor of the clinical course, at least in major depression and panic disorder. CRH-R1 or glucorticoid receptor antagonists and ANP receptor agonists are currently being studied and may provide future treatment options more closely related to the pathophysiology of the disorders.",
"title": ""
},
{
"docid": "ea304e700faa3d3cae4bff89cf01c397",
"text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
{
"docid": "9e8a1a70af4e52de46d773cec02f99a7",
"text": "In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. We identify key differences between this domain and the work performed on other domains, such as news, which makes existing approaches for automatic keyword extraction not generalize well on Twitter datasets. These datasets include the small amount of content in each tweet, the frequent usage of lexical variants and the high variance of the cardinality of keywords present in each tweet. We propose methods for addressing these issues, which leads to solid improvements on this dataset for this task.",
"title": ""
},
{
"docid": "5e9f0743d7f913769967772038a85c01",
"text": "A human listener has the remarkable ability to segregate an acoustic mixture and attend to a target sound. This perceptual process is called auditory scene analysis (ASA). Moreover, the listener can accomplish much of auditory scene analysis with only one ear. Research in ASA has inspired many studies in computational auditory scene analysis (CASA) for sound segregation. In this chapter we introduce a CASA approach to monaural speech segregation. After a brief overview of CASA, we present in detail a CASA system that segregates both voiced and unvoiced speech. Our description covers the major stages of CASA, including feature extraction, auditory segmentation, and grouping.",
"title": ""
},
{
"docid": "b51c309fb2d77da3647739c41d71fd5a",
"text": "We propose a benchmark for 6D pose estimation of a rigid object from a single RGB-D input image. The training data consists of a texture-mapped 3D object model or images of the object in known 6D poses. The benchmark comprises of: i) eight datasets in a unified format that cover different practical scenarios, including two new datasets focusing on varying lighting conditions, ii) an evaluation methodology with a pose-error function that deals with pose ambiguities, iii) a comprehensive evaluation of 15 diverse recent methods that captures the status quo of the field, and iv) an online evaluation system that is open for continuous submission of new results. The evaluation shows that methods based on point-pair features currently perform best, outperforming template matching methods, learning-based methods and methods based on 3D local features. The project website is available at bop.felk.cvut.cz.",
"title": ""
}
] |
scidocsrr
|