query_id: stringlengths 32 to 32
query: stringlengths 6 to 5.38k
positive_passages: listlengths 1 to 17
negative_passages: listlengths 9 to 100
subset: stringclasses, 7 values
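As a minimal sketch of how a dataset with this schema might be loaded and inspected, the snippet below uses the Hugging Face `datasets` library. The dataset identifier and split name are placeholders (assumptions, not the actual location of this data); the field accesses mirror the columns listed above and the example rows that follow.

```python
from datasets import load_dataset

# Placeholder identifier and split -- substitute the real repository hosting this data.
ds = load_dataset("org/retrieval-dataset", split="test")

row = ds[0]
print(row["query_id"])                # 32-character id string
print(row["query"])                   # free-text query (6 to ~5.38k characters)
print(len(row["positive_passages"]))  # between 1 and 17 relevant passages
print(len(row["negative_passages"]))  # between 9 and 100 non-relevant passages
print(row["subset"])                  # one of 7 subset names, e.g. "scidocsrr"

# As in the rows below, each passage is a dict with "docid", "text" and "title" keys.
for passage in row["positive_passages"]:
    print(passage["docid"], passage["title"], passage["text"][:80])
```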
query_id: 63e603175c9009da16d78693caab1772
query: Spectral and Energy-Efficient Wireless Powered IoT Networks: NOMA or TDMA?
[ { "docid": "1a615a022c441f413fcbdb3dbff9e66d", "text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.", "title": "" }, { "docid": "29360e31131f37830e0d6271bab63a6e", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.", "title": "" } ]
[ { "docid": "93b87e8dde0de0c1b198f6a073858d80", "text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.", "title": "" }, { "docid": "45494f14c2d9f284dd3ad3a5be49ca78", "text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.", "title": "" }, { "docid": "fd9461aeac51be30c9d0fbbba298a79b", "text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. 
Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.", "title": "" }, { "docid": "cb266f07461a58493d35f75949c4605e", "text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.", "title": "" }, { "docid": "9a7c915803c84bc2270896bd82b4162d", "text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.", "title": "" }, { "docid": "4e6709bf897352c4e8b24a5b77e4e2c5", "text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. 
In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.", "title": "" }, { "docid": "344e5742cc3c1557589cea05b429d743", "text": "Herein we present a novel big-data framework for healthcare applications. Healthcare data is well suited for bigdata processing and analytics because of the variety, veracity and volume of these types of data. In recent times, many areas within healthcare have been identified that can directly benefit from such treatment. However, setting up these types of architecture is not trivial. We present a novel approach of building a big-data framework that can be adapted to various healthcare applications with relative use, making this a one-stop “Big-Data-Healthcare-in-a-Box”.", "title": "" }, { "docid": "ddc18f2d129d95737b8f0591560d202d", "text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.", "title": "" }, { "docid": "48aa68862748ab502f3942300b4d8e1e", "text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. 
By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.", "title": "" }, { "docid": "b4a2c3679fe2490a29617c6a158b9dbc", "text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.", "title": "" }, { "docid": "77b78ec70f390289424cade3850fc098", "text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.", "title": "" }, { "docid": "1c915d0ffe515aa2a7c52300d86e90ba", "text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.", "title": "" }, { "docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc", "text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. 
In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting", "title": "" }, { "docid": "82a0169afe20e2965f7fdd1a8597b7d3", "text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.", "title": "" }, { "docid": "387634b226820f3aa87fede466acd6c2", "text": "Objectives To evaluate the ability of a short-form FCE to predict future timely and sustained return-to-work. Methods A prospective cohort study was conducted using data collected during a cluster RCT. Subject performance on the items in the short-form FCE was compared to administrative recovery outcomes from a workers’ compensation database. Outcomes included days to claim closure, days to time loss benefit suspension and future recurrence (defined as re-opening a closed claim, restarting benefits, or filing a new claim for injury to the same body region). Analysis included multivariable Cox and logistic regression using a risk factor modeling strategy. Potential confounders included age, sex, injury duration, and job attachment status, among others. Results The sample included 147 compensation claimants with a variety of musculoskeletal injuries. Subjects who demonstrated job demand levels on all FCE items were more likely to have their claims closed (adjusted Hazard Ratio 5.52 (95% Confidence Interval 3.42–8.89), and benefits suspended (adjusted Hazard Ratio 5.45 (95% Confidence Interval 2.73–10.85) over the follow-up year. The proportion of variance explained by the FCE ranged from 18 to 27%. FCE performance was not significantly associated with future recurrence. Conclusion A short-form FCE appears to provide useful information for predicting time to recovery as measured through administrative outcomes, but not injury recurrence. The short-form FCE may be an efficient option for clinicians using FCE in the management of injured workers.", "title": "" }, { "docid": "9fd247bb0f45d09e11c05fca48372ee8", "text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. 
The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.", "title": "" }, { "docid": "0d0fae25e045c730b68d63e2df1dfc7f", "text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.", "title": "" }, { "docid": "04373474e0d9902fdee169492ece6dd0", "text": "The development of ivermectin as a complementary vector control tool will require good quality evidence. This paper reviews the different eco-epidemiological contexts in which mass drug administration with ivermectin could be useful. Potential scenarios and pharmacological strategies are compared in order to help guide trial design. The rationale for a particular timing of an ivermectin-based tool and some potentially useful outcome measures are suggested.", "title": "" }, { "docid": "5c1d6a2616a54cd8d8316b8d37f0147d", "text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. 
Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.", "title": "" } ]
subset: scidocsrr
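A row of this shape is typically used to check whether a retrieval model ranks the positive passages above the negatives for its query. The sketch below is only an illustration under stated assumptions: it scores passages with a plain TF-IDF cosine similarity from scikit-learn rather than any model associated with this dataset, and `row` is assumed to be one record with the fields shown above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_passages(row):
    """Rank a row's positive + negative passages against its query by TF-IDF cosine similarity."""
    passages = row["positive_passages"] + row["negative_passages"]
    labels = [1] * len(row["positive_passages"]) + [0] * len(row["negative_passages"])
    texts = [p["text"] for p in passages]

    vectorizer = TfidfVectorizer()
    passage_matrix = vectorizer.fit_transform(texts)     # one TF-IDF vector per passage
    query_vector = vectorizer.transform([row["query"]])  # query embedded in the same space
    scores = cosine_similarity(query_vector, passage_matrix)[0]

    order = scores.argsort()[::-1]                       # highest score first
    return [(passages[i]["docid"], labels[i], float(scores[i])) for i in order]

def reciprocal_rank(ranked):
    """1 / rank of the first positive passage (1-based), 0 if no positive is retrieved."""
    for rank, (_, label, _) in enumerate(ranked, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0
```

Averaging `reciprocal_rank` over all rows gives mean reciprocal rank, one common way such query/positive/negative records are summarized.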
query_id: c5c495f5eac4239f4d35d20581d38d58
query: A multi-source dataset of urban life in the city of Milan and the Province of Trentino
[ { "docid": "a026cb81bddfa946159d02b5bb2e341d", "text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.", "title": "" }, { "docid": "f8fe22b2801a250a52e3d19ae23804e9", "text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.", "title": "" } ]
[ { "docid": "e05fc780d1f3fd4061918e50f5dd26a0", "text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.", "title": "" }, { "docid": "c74cd5b9753579517462909bd196ad90", "text": "Interactions around money and financial services are a critical part of our lives on and off-line. New technologies and new ways of interacting with these technologies are of huge interest; they enable new business models and ways of making sense of this most important aspect of our everyday lives. At the same time, money is an essential element in HCI research and design. This workshop is intended to bring together researchers and practitioners involved in the design and use of systems that combine digital and new media with monetary and financial interactions to build on an understanding of these technologies and their impacts on users' behaviors. The workshop will focus on social, technical, and economic aspects around everyday user interactions with money and emerging financial technologies and systems.", "title": "" }, { "docid": "88ca6c25c4be7523eea29d909bd84813", "text": "A health risk appraisal function has been developed for the prediction of stroke using the Framingham Study cohort. The stroke risk factors included in the profile are age, systolic blood pressure, the use of antihypertensive therapy, diabetes mellitus, cigarette smoking, prior cardiovascular disease (coronary heart disease, cardiac failure, or intermittent claudication), atrial fibrillation, and left ventricular hypertrophy by electrocardiogram. Based on 472 stroke events occurring during 10 years' follow-up from biennial examinations 9 and 14, stroke probabilities were computed using the Cox proportional hazards model for each sex based on a point system. On the basis of the risk factors in the profile, which can be readily determined on routine physical examination in a physician's office, stroke risk can be estimated. An individual's risk can be related to the average risk of stroke for persons of the same age and sex. The information that one's risk of stroke is several times higher than average may provide the impetus for risk factor modification. It may also help to identify persons at substantially increased stroke risk resulting from borderline levels of multiple risk factors such as those with mild or borderline hypertension and facilitate multifactorial risk factor modification.", "title": "" }, { "docid": "9954793c44b1b8fc87c0ae8724e0e4de", "text": "The Khanya project has been equipping schools and educators with ICT skills and equipment to be used in the curriculum delivery in South Africa. However, research and anecdotal evidence show that there is low adoption rate of ICT among educators in Khanya schools. 
This interpretive study sets out to analyse the factors which are preventing the educators from using the technology in their work. The perspective of limited access and/or use of ICT as deprivation of capabilities provides a conceptual base for this paper. We employed Sen’s Capability Approach as a conceptual lens to examine the educators’ situation regarding ICT for teaching and learning. Data was collected through in-depth interviews with fourteen educators and two Khanya personnel. The results of the study show that there are a number of factors (personal, social and environmental) which are preventing the educators from realising their potential capabilities from the ICT.", "title": "" }, { "docid": "cb7b53be8ef7cd9330445668f8f0eee6", "text": "Humans have an innate tendency to anthropomorphize surrounding entities and have always been fascinated by the creation of machines endowed with human-inspired capabilities and traits. In the last few decades, this has become a reality with enormous advances in hardware performance, computer graphics, robotics technology, and artificial intelligence. New interdisciplinary research fields have brought forth cognitive robotics aimed at building a new generation of control systems and providing robots with social, empathetic and affective capabilities. This paper presents the design, implementation, and test of a human-inspired cognitive architecture for social robots. State-of-the-art design approaches and methods are thoroughly analyzed and discussed, cases where the developed system has been successfully used are reported. The tests demonstrated the system’s ability to endow a social humanoid robot with human social behaviors and with in-silico robotic emotions.", "title": "" }, { "docid": "8f089d55c0ce66db7bbf27476267a8e5", "text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.", "title": "" }, { "docid": "aabf75855e39682b353c46332bc218db", "text": "Semantic Web Mining is the outcome of two new and fast developing domains: Semantic Web and Data Mining. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Data Mining is the nontrivial process of identifying valid, previously unknown, potentially useful patterns in data. Semantic Web Mining refers to the application of data mining techniques to extract knowledge from World Wide Web or the area of data mining that refers to the use of algorithms for extracting patterns from resources distributed over in the web. The aim of Semantic Web Mining is to discover and retrieve useful and interesting patterns from a huge set of web data. This web data consists of different kind of information, including web structure data, web log data and user profiles data. 
Semantic Web Mining is a relatively new area, broadly interdisciplinary, attracting researchers from: computer science, information retrieval specialists and experts from business studies fields. Web data mining includes web content mining, web structure mining and web usage mining. All of these approaches attempt to extract knowledge from the web, produce some useful results from the knowledge extracted and apply these results to the real world problems. To improve the internet service quality and increase the user click rate on a specific website, it is necessary for a web developer to know what the user really want to do, predict which pages the user is potentially interested in. In this paper, various techniques for Semantic Web mining like web content mining, web usage mining and web structure mining are discussed. Our main focus is on web usage mining and its application in web personalization. Study shows that the accuracy of recommendation system has improved significantly with the use of semantic web mining in web personalization.", "title": "" }, { "docid": "d4ffeb204691f9a9188e8deecaf2d811", "text": "Salsify is a new architecture for real-time Internet video that tightly integrates a video codec and a network transport protocol, allowing it to respond quickly to changing network conditions and avoid provoking packet drops and queueing delays. To do this, Salsify optimizes the compressed length and transmission time of each frame, based on a current estimate of the network’s capacity; in contrast, existing systems generally control longer-term metrics like frame rate or bit rate. Salsify’s per-frame optimization strategy relies on a purely functional video codec, which Salsify uses to explore alternative encodings of each frame at different quality levels. We developed a testbed for evaluating real-time video systems end-to-end with reproducible video content and network conditions. Salsify achieves lower video delay and, over variable network paths, higher visual quality than five existing systems: FaceTime, Hangouts, Skype, and WebRTC’s reference implementation with and without scalable video coding.", "title": "" }, { "docid": "66878197b06f3fac98f867d5457acafe", "text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. 
Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.", "title": "" }, { "docid": "193aee1131ce05d5d4a4316871c193b8", "text": "In this paper, we discuss wireless sensor and networking technologies for swarms of inexpensive aquatic surface drones in the context of the HANCAD project. The goal is to enable the swarm to perform maritime tasks such as sea-border patrolling and environmental monitoring, while keeping the cost of each drone low. Communication between drones is essential for the success of the project. Preliminary experiments show that XBee modules are promising for energy efficient multi-hop drone-to-drone communication.", "title": "" }, { "docid": "2ad8723c9fce1a6264672f41824963f8", "text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.", "title": "" }, { "docid": "37572963400c8a78cef3cd4a565b328e", "text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. 
It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.", "title": "" }, { "docid": "37642371bbcc3167f96548d02ccd832e", "text": "The manipulation of light-matter interactions in two-dimensional atomically thin crystals is critical for obtaining new optoelectronic functionalities in these strongly confined materials. Here, by integrating chemically grown monolayers of MoS2 with a silver-bowtie nanoantenna array supporting narrow surface-lattice plasmonic resonances, a unique two-dimensional optical system has been achieved. The enhanced exciton-plasmon coupling enables profound changes in the emission and excitation processes leading to spectrally tunable, large photoluminescence enhancement as well as surface-enhanced Raman scattering at room temperature. Furthermore, due to the decreased damping of MoS2 excitons interacting with the plasmonic resonances of the bowtie array at low temperatures stronger exciton-plasmon coupling is achieved resulting in a Fano line shape in the reflection spectrum. The Fano line shape, which is due to the interference between the pathways involving the excitation of the exciton and plasmon, can be tuned by altering the coupling strengths between the two systems via changing the design of the bowties lattice. The ability to manipulate the optical properties of two-dimensional systems with tunable plasmonic resonators offers a new platform for the design of novel optical devices with precisely tailored responses.", "title": "" }, { "docid": "3cf458392fb61a5e70647c9c951d5db8", "text": "This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction.", "title": "" }, { "docid": "f0f7bd0223d69184f3391aaf790a984d", "text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. 
In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.", "title": "" }, { "docid": "99efebd647fa083fab4e0f091b0b471b", "text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "be427b129a89edb6da1b21c4f8df526b", "text": "Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the system’s overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism and it is argued that commitments (pledges to undertake a specified course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.", "title": "" }, { "docid": "42167e7708bb73b08972e15a44a6df02", "text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. 
A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "title": "" }, { "docid": "374383490d88240b410a14a185ff082e", "text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.", "title": "" }, { "docid": "96da5252dac0eb0010a49519592c4104", "text": "Three-level converters are becoming a realistic alternative to the conventional converters in high-power wind-energy applications. In this paper, a complete analytical strategy to model a back-to-back three-level converter is described. This tool permits us to adapt the control strategy to the specific application. Moreover, the model of different loads can be incorporated to the overall model. Both control strategy and load models are included in the complete system model. The proposed model pays special attention to the unbalance in the capacitors' voltage of three-level converters, including the dynamics of the capacitors' voltage. In order to validate the model and the control strategy proposed in this paper, a 3-MW three-level back-to-back power converter used as a power conditioning system of a variable speed wind turbine has been simulated. Finally, the described strategy has been implemented in a 50-kVA scalable prototype as well, providing a satisfactory performance", "title": "" } ]
subset: scidocsrr
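Beyond evaluation, records of this form are often flattened into (query, positive, negative) triplets for contrastive training of retrieval models. The helper below sketches only that transformation; the pairing strategy (reusing the first few negatives for every positive) is an arbitrary assumption, not something prescribed by the dataset.

```python
import itertools

def make_triplets(row, max_negatives_per_positive=4):
    """Flatten one record into (query, positive_text, negative_text) training triplets."""
    triplets = []
    for pos in row["positive_passages"]:
        # Arbitrary pairing choice: each positive is matched with the first few negatives.
        for neg in itertools.islice(row["negative_passages"], max_negatives_per_positive):
            triplets.append((row["query"], pos["text"], neg["text"]))
    return triplets
```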
query_id: b10bd07f3a3c5cb0ff56d279dac00f02
query: Modelling IT projects success with Fuzzy Cognitive Maps
[ { "docid": "447c36d34216b8cb890776248d9cc010", "text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.", "title": "" } ]
[ { "docid": "347509d68f6efd4da747a7a3e704a9a2", "text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.", "title": "" }, { "docid": "5b0eaf636d6d8cf0523e3f00290b780f", "text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.", "title": "" }, { "docid": "7f070d85f4680a2b88d3b530dff0cfc5", "text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). 
Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.", "title": "" }, { "docid": "a33f862d0b7dfde7b9f18aa193db9acf", "text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor  awais.shakoor22@gmail.com Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. 
Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). 
Phytoremediation mitigates pollutant concentrations in contaminated soil, water or air using plants that are able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants from the media that contain them. Phytoremediation comprises several techniques, and the choice among them depends on different factors, such as soil type, contaminant type, soil depth and the level of the ground water, as well as the particular operating conditions and the specific technology applied at the contaminated site (Hyman and Dupont, 2001). Techniques of phytoremediation: Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. Phytoextraction: Phytoextraction is also called phytoabsorption or phytoaccumulation; in this technique heavy metals are removed by uptake through the roots from the water and soil environment and accumulated in the shoot (Rafati et al., 2011). Phytostabilisation: Phytostabilisation is also known as phytoimmobilization. In this technique different types of plants are used to stabilize contaminants in the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the contaminants are reduced, so it helps to avoid their movement into the food chain as well as into the ground water (Erakhrumen, 2007). Nevertheless, phytostabilisation only stops the movement of heavy metals and is not a permanent solution for removing contamination from the soil; basically, phytostabilisation is a management approach for inactivating potentially toxic heavy metal contaminants in the soil environment (Vangronsveld et al., 2009).", "title": "" }, { "docid": "d0253bb3efe714e6a34e8dd5fc7dcf81", "text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.", "title": "" }, { "docid": "73267467deec2701d6628a0d3572132e", "text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. 
The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4-IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. GLOSSARY: ADEM = acute disseminated encephalomyelitis; AQP4 = aquaporin-4; IgG = immunoglobulin G; IPND = International Panel for NMO Diagnosis; LETM = longitudinally extensive transverse myelitis lesions; MOG = myelin oligodendrocyte glycoprotein; MS = multiple sclerosis; NMO = neuromyelitis optica; NMOSD = neuromyelitis optica spectrum disorders; SLE = systemic lupus erythematosus; SS = Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis, but relapsing cases were described in the 20th century. MRI revealed normal brain scans and longitudinally extensive transverse myelitis lesions (LETM) spanning 3 or more vertebral segments in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4-immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria.", "title": "" }, { "docid": "6a1073b72ef20fd59e705400dbdcc868", "text": "In today's world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today's electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken.", "title": "" }, { "docid": "e67bb4c784b89b2fee1ab7687b545683", "text": "Many people have a strong intuition that there is something morally objectionable about playing violent video games, particularly with increases in the number of people who are playing them and the games' alleged contribution to some highly publicized crimes. In this paper, I use the framework of utilitarian, deontological, and virtue ethical theories to analyze the possibility that there might be some philosophical foundation for these intuitions. I raise the broader question of whether or not participating in authentic simulations of immoral acts in general is wrong. 
I argue that neither the utilitarian nor the Kantian has substantial objections to violent game playing, although they offer some important insights into playing games in general and what it is morally to be a \"good sport.\" The Aristotelian, however, has a plausible and intuitive way to protest participation in authentic simulations of violent acts in terms of character: engaging in simulated immoral acts erodes one's character and makes it more difficult for one to live a fulfilled eudaimonic life.", "title": "" }, { "docid": "b36cc742445db810d40c884a90e2cf42", "text": "The telecommunication sector generates a huge amount of data due to the increasing number of subscribers, rapidly evolving technologies, data-based applications and other value added services. This data can be usefully mined for churn analysis and prediction. Significant research has been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.", "title": "" }, { "docid": "6bf38b6decda962ea03ab429f5fbde4f", "text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest from researchers. Limited computational capacity and power usage are two major challenges to ensuring security in WSNs. Recently, more secure communication and data aggregation techniques have been discovered, so familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security-related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on future research directions in WSN security.", "title": "" }, { "docid": "e16bf4ab7c56b6827369f19afb2d4744", "text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. 
Recently, a novel architecture, namely the feedforward sequential memory network (FSMN), has provided a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory network (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirectional LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.", "title": "" }, { "docid": "fbcdb3d565519b47922394dc9d84985f", "text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.", "title": "" }, { "docid": "b8322d65e61be7fb252b2e418df85d3e", "text": "Algorithms for filtering, edge detection, and extraction of details, together with their implementation using cellular neural networks (CNN), are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed, and a new learning algorithm for this type of neuron is presented. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms for edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered; these algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy for gray-scale image processing using CNN is considered.", "title": "" }, { "docid": "b4bd19c2285199e280cb41e733ec5498", "text": "In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. It presents us with a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data. Also, deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server. 
Mobile AR devices can capture real-time videos and locally decide which frame should be sent to the back-end server for logo detection. The server schedules all detection jobs to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system can detect the view change activity with an accuracy of 95.7% and successfully process 40 image processing jobs before their deadlines.", "title": "" }, { "docid": "cc2a7d6ac63f12b29a6d30f20b5547be", "text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user's context, where context includes the user's physical, social, emotional, and mental (focus-of-attention) environments. While a user's context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk system in a desktop setting and are currently using it to build an intelligent home environment.", "title": "" }, { "docid": "5b0b8da7faa91343bad6296fd7cb181f", "text": "Transportation research relies heavily on a variety of data. From sensors to surveys, data supports day-to-day operations as well as long-term planning and decision-making. The challenges that arise due to the volume and variety of data that are found in transportation research can be effectively addressed by ontologies. This opportunity has already been recognized: there are a number of existing transportation ontologies, but the relationship between them is unclear. The goal of this work is to provide an overview of the opportunities for ontologies in transportation research and operation, and to present a survey of existing transportation ontologies to serve two purposes: (1) to provide a resource for the transportation research community to aid in understanding (and potentially selecting between) existing transportation ontologies; and (2) to identify future work for the development of transportation ontologies, by identifying areas that may be lacking.", "title": "" }, { "docid": "96f616c7a821c1f74fc77e5649483343", "text": "The study of forecasting models using large scale microblog discussions and search behavior data can provide good insight for better understanding market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI from Google) for a period of June 2010 to September 2011. We model a set of comprehensive causative relationships over this dataset for various market securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100), commodity markets (oil and gold) and Euro Forex rates. We also investigate the lagged and statistically causative relations of Twitter sentiments developed during active trading days and market inactive days in combination with the search behavior of the public before any change in the prices/indices. Our results show the extent of lagged significance, with a high correlation value of up to 0.82 between search volumes and gold price in USD. 
We find weekly accuracy in direction (up and down prediction) of up to 94.3% for DJIA and 90% for NASDAQ-100, with a significant reduction in mean average percentage error for all the forecasting models.", "title": "" }, { "docid": "a5168d6ca63300f26b7388f67d10cb3c", "text": "In recent years, the improvement of wireless protocols, the development of cloud services and the lower cost of hardware have started a new era for smart homes. One such enabling technology is fog computing, which extends cloud computing to the edge of a network, allowing for the development of novel Internet of Things (IoT) applications and services. Under the IoT fog computing paradigm, IoT gateways are usually utilized to exchange messages with IoT nodes and a cloud. WiFi and ZigBee stand out as preferred communication technologies for smart homes. WiFi has become very popular, but it has a limited application due to its high energy consumption and the lack of standard mesh networking capabilities for low-power devices. For such reasons, ZigBee was selected by many manufacturers for developing wireless home automation devices. As a consequence, these technologies may coexist in the 2.4 GHz band, which leads to collisions, lower speed rates and increased communications latencies. This article presents ZiWi, a distributed fog computing Home Automation System (HAS) that allows for carrying out seamless communications among ZigBee and WiFi devices. This approach diverges from traditional home automation systems, which often rely on expensive central controllers. In addition, to ease the platform's building process, whenever possible, the system makes use of open-source software (all the code of the nodes is available on GitHub) and Commercial Off-The-Shelf (COTS) hardware. The initial results, which were obtained in a number of representative home scenarios, show that the developed fog services respond several times faster than the evaluated cloud services, and that cross-interference has to be taken seriously to prevent collisions. In addition, the current consumption of ZiWi's nodes was measured, showing the impact of encryption mechanisms.", "title": "" }, { "docid": "52d2ff16f6974af4643a15440ae09fec", "text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today's higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of e-learning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students' and educators' self-regulation skills, in particular their metacognitive skills. In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their self-regulation skills. 
In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning. This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.", "title": "" } ]
scidocsrr
e461a00ceb5f8937f05bf68665b57ec8
Rumor Identification and Belief Investigation on Twitter
[ { "docid": "0c886080015642aa5b7c103adcd2a81d", "text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.", "title": "" }, { "docid": "860894abbbafdcb71178cb9ddd173970", "text": "Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.", "title": "" } ]
[ { "docid": "45390290974f347d559cd7e28c33c993", "text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.", "title": "" }, { "docid": "0c67628fb24c8cbd4a8e49fb30ba625e", "text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.", "title": "" }, { "docid": "fc62e84fc995deb1932b12821dfc0ada", "text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.", "title": "" }, { "docid": "e4405c71336ea13ccbd43aa84651dc60", "text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.", "title": "" }, { "docid": "bc30cd034185df96d20174b9719f3177", "text": "Toxicity in online environments is a complex and a systemic issue. Esports communities seem to be particularly suffering from toxic behaviors. Especially in competitive esports games, negative behavior, such as harassment, can create barriers to players achieving high performance and can reduce players' enjoyment which may cause them to leave the game. The aim of this study is to review design approaches in six major esports games to deal with toxic behaviors and to investigate how players perceive and deal with toxicity in those games. 
Our preliminary findings from an interview study with 17 participants (3 female) from a university esports club show that players define toxicity as behaviors disrupt their morale and team dynamics, and participants are inclined to normalize negative behaviors and rationalize it as part of the competitive game culture. If they choose to take an action against toxic players, they are likely to ostracize toxic players.", "title": "" }, { "docid": "cbe1dc1b56716f57fca0977383e35482", "text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.", "title": "" }, { "docid": "6a8ac2a2786371dcb043d92fa522b726", "text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming global policy from modules policies, and demonstrate our results using a 2D grid world.", "title": "" }, { "docid": "f264d5b90dfb774e9ec2ad055c4ebe62", "text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.", "title": "" }, { "docid": "57d162c64d93b28f6be1e086b5a1c134", "text": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. 
In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.", "title": "" }, { "docid": "2d7892534b0e279a426e3fdbc3849454", "text": "What do we see when we glance at a natural scene and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects tend to have a propensity toward perceiving natural scenes as being outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of the semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.", "title": "" }, { "docid": "5cc26542d0f4602b2b257e19443839b3", "text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. 
Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.", "title": "" }, { "docid": "4704f3ed7a5d5d9b244689019025730f", "text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.", "title": "" }, { "docid": "82917c4e6fb56587cc395078c14f3bb7", "text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. 
Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.", "title": "" }, { "docid": "70d874f2f919c6749c4105f35776532b", "text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Hadoop is one such open-source framework that is enjoying widespread adoption. In this paper, we detail an approach to indexing and performing key analytics on spatial data that is persisted in HDFS. Our technique differs from other approaches in that it combines spatial indexing, data load balancing, and data clustering in order to optimize performance across the cluster. In addition, our index supports efficient, random-access queries without requiring a MapReduce job; neither a full table scan, nor any MapReduce overhead is incurred when searching. This facilitates large numbers of concurrent query executions. We will also demonstrate how indexing and clustering positively impacts the performance of range and k-NN queries on large real-world datasets. 
The performance analysis will enable a number of interesting observations to be made on the behavior of spatial indexes and spatial queries in this distributed processing environment.", "title": "" }, { "docid": "b7d13c090e6d61272f45b1e3090f0341", "text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "title": "" }, { "docid": "865d7b8fae1cab739570229889177d58", "text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. The V/F", "title": "" }, { "docid": "1f7fb5da093f0f0b69b1cc368cea0701", "text": "This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on \"what\" and \"where\" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. 
We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "8fc87a5f89792b3ea69833dcae90cd6e", "text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.", "title": "" }, { "docid": "1c2cc1120129eca44443a637c0f06729", "text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF", "title": "" } ]
scidocsrr
97b97b86086b35fd1b19558349c1a489
Character-Aware Neural Networks for Arabic Named Entity Recognition for Social Media
[ { "docid": "cb929b640f8ee7b550512dd4d0dc8e17", "text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "title": "" }, { "docid": "fe1bc993047a95102f4331f57b1f9197", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "51ef96b352d36f5ab933c10184bb385b", "text": "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.", "title": "" }, { "docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2", "text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "title": "" } ]
[ { "docid": "2f2cab35a8cf44c4564c0e26e0490f29", "text": "In this paper, we propose a synthetic generationmethod for time-series data based on generative adversarial networks (GANs) and apply it to data augmentation for biosinal classification. GANs are a recently proposed framework for learning a generative model, where two neural networks, one generating synthetic data and the other discriminating synthetic and real data, are trained while competing with each other. In the proposed method, each neural network in GANs is developed based on a recurrent neural network using long short-term memories, thereby allowing the adaptation of the GANs framework to time-series data generation. In the experiments, we confirmed the capability of the proposed method for generating synthetic biosignals using the electrocardiogram and electroencephalogram datasets. We also showed the effectiveness of the proposed method for data augmentation in the biosignal classification problem.", "title": "" }, { "docid": "f779fbb68782deb5e386bf266fbeae5a", "text": "The paper attempts to provide forecast methodological framework and concrete models to estimate long run probability of default term structure for Hungarian corporate debt instruments, in line with IFRS 9 requirements. Long run probability of default and expected loss can be estimated by various methods and has fifty-five years of history in literature. After studying literature and empirical models, the Markov chain approach was selected to accomplish lifetime probability of default modeling for Hungarian corporate debt instruments. Empirical results reveal that both discrete and continuous homogeneous Markov chain models systematically overestimate the long term corporate probability of default. However, the continuous nonhomogeneous Markov chain gives both intuitively and empirically appropriate probability of default trajectories. The estimated term structure mathematically and professionally properly expresses the probability of default element of expected loss that can realistically occur in the long-run in Hungarian corporate lending. The elaborated models can be easily implemented at Hungarian corporate financial institutions.", "title": "" }, { "docid": "796f46abd496ebb8784122a9c9f65e1d", "text": "Authorship verification can be checked using stylometric techniques through the analysis of linguistic styles and writing characteristics of the authors. Stylometry is a behavioral feature that a person exhibits during writing and can be extracted and used potentially to check the identity of the author of online documents. Although stylometric techniques can achieve high accuracy rates for long documents, it is still challenging to identify an author for short documents, in particular when dealing with large authors populations. These hurdles must be addressed for stylometry to be usable in checking authorship of online messages such as emails, text messages, or twitter feeds. In this paper, we pose some steps toward achieving that goal by proposing a supervised learning technique combined with n-gram analysis for authorship verification in short texts. 
Experimental evaluation based on the Enron email dataset involving 87 authors yields very promising results consisting of an Equal Error Rate (EER) of 14.35% for message blocks of 500 characters.", "title": "" }, { "docid": "a34efaa2a8739cce020cb5fe1da6883d", "text": "Graphical models, as applied to multi-target prediction problems, commonly utilize interaction terms to impose structure among the output variables. Often, such structure is based on the assumption that related outputs need to be similar and interaction terms that force them to be closer are adopted. Here we relax that assumption and propose a feature that is based on distance and can adapt to ensure that variables have smaller or larger difference in values. We utilized a Gaussian Conditional Random Field model, where we have extended its originally proposed interaction potential to include a distance term. The extended model is compared to the baseline in various structured regression setups. An increase in predictive accuracy was observed on both synthetic examples and real-world applications, including challenging tasks from climate and healthcare domains.", "title": "" }, { "docid": "33dcba37947e3bdb5956f7355393eea5", "text": "Big Data and Cloud computing are the most important technologies that give the opportunity for government agencies to gain a competitive advantage and improve their organizations. On one hand, Big Data implementation requires investing a significant amount of money in hardware, software, and workforce. On the other hand, Cloud Computing offers an unlimited, scalable and on-demand pool of resources which provide the ability to adopt Big Data technology without wasting on the financial resources of the organization and make the implementation of Big Data faster and easier. The aim of this study is to conduct a systematic literature review in order to collect data to identify the benefits and challenges of Big Data on Cloud for government agencies and to make a clear understanding of how combining Big Data and Cloud Computing help to overcome some of these challenges. The last objective of this study is to identify the solutions for related challenges of Big Data. Four research questions were designed to determine the information that is related to the objectives of this study. Data is collected using literature review method and the results are deduced from there.", "title": "" }, { "docid": "3d81f003b29ad4cea90a533a002f3082", "text": "Technology roadmapping is becoming an increasingly important and widespread approach for aligning technology with organizational goals. The popularity of roadmapping is due mainly to the communication and networking benefits that arise from the development and dissemination of roadmaps, particularly in terms of building common understanding across internal and external organizational boundaries. From its origins in Motorola and Corning more than 25 years ago, where it was used to link product and technology plans, the approach has been adapted for many different purposes in a wide variety of sectors and at all levels, from small enterprises to national foresight programs. Building on previous papers presented at PICMET, concerning the rapid initiation of the technique, and how to customize the approach, this paper highlights the evolution and continuing growth of the method and its application to general strategic planning. 
The issues associated with extending the roadmapping method to form a central element of an integrated strategic planning process are considered.", "title": "" }, { "docid": "f5d2052dd5f5bb359cfbc80856cb7793", "text": "We describe our experience with collecting roughly 250, 000 image annotations on Amazon Mechanical Turk (AMT). The annotations we collected range from location of keypoints and figure ground masks of various object categories, 3D pose estimates of head and torsos of people in images and attributes like gender, race, type of hair, etc. We describe the setup and strategies we adopted to automatically approve and reject the annotations, which becomes important for large scale annotations. These annotations were used to train algorithms for detection, segmentation, pose estimation, action recognition and attribute recognition of people in images.", "title": "" }, { "docid": "4207c7f69d65c5b46abce85a369dada1", "text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.", "title": "" }, { "docid": "f1f72a6d5d2ab8862b514983ac63480b", "text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. 
The methods are implemented inside the Mopsi2 route module.", "title": "" }, { "docid": "591438f31d3f7b8093f8d10874a17d5b", "text": "Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure object-oriented languages, but the degree to which these results are transferable to applications written in hybrid languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the \"object-orientedness\" of a program.", "title": "" }, { "docid": "cdd43b3baa9849441817b5f31d7cb0e0", "text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.", "title": "" }, { "docid": "c1f095252c6c64af9ceeb33e78318b82", "text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through headmounted displays. We first introduce a method for calibrating monocular optical seethrough displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-offreedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. 
In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.", "title": "" }, { "docid": "e18a8e3622ae85763c729bd2844ce14c", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.028 ⇑ Corresponding author. E-mail address: dgil@dtic.ua.es (D. Gil). 1 These authors equally contributed to this work. Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to asses the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f5f1300baf7ed92626c912b98b6308c9", "text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.", "title": "" }, { "docid": "867c8c0286c0fed4779f550f7483770d", "text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. 
We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.", "title": "" }, { "docid": "71cd341da48223745e0abc5aa9aded7b", "text": "MIMO is a technology that utilizes multiple antennas at transmitter/receiver to improve the throughput, capacity and coverage of wireless system. Massive MIMO where Base Station is equipped with orders of magnitude more antennas have shown over 10 times spectral efficiency increase over MIMO with simpler signal processing algorithms. Massive MIMO has benefits of enhanced capacity, spectral and energy efficiency and it can be built by using low cost and low power components. Despite its potential benefits, this paper also summarizes some challenges faced by massive MIMO such as antenna spatial correlation and mutual coupling as well as non-linear hardware impairments. These challenges encountered in massive MIMO uncover new problems that need further investigation.", "title": "" }, { "docid": "13e8fd8e8462e4bbb267f909403f9872", "text": "Ergative case, the special case of transitive subjects, rai ses questions not only for the theory of case but also for theories of subjectho od and transitivity. This paper analyzes the case system of Nez Perce, a ”three-way erg tiv ” language, with an eye towards a formalization of the category of transitive subject . I show that it is object agreement that is determinative of transitivity, an d hence of ergative case, in Nez Perce. I further show that the transitivity condition on ergative case must be coupled with a criterion of subjecthood that makes reference to participation in subject agreement, not just to origin in a high argument-structural position. These two results suggest a formalization of the transitive subject as that ar gument uniquely accessing both high and low agreement information, the former through its (agreement-derived) connection with T and the latter through its origin in the spe cifi r of a head associated with object agreement (v). In view of these findings, I ar gue that ergative case morphology should be analyzed not as the expression of a synt ctic primitive but as the morphological spell-out of subject agreement and objec t agreement on a nominal.", "title": "" }, { "docid": "e5380801d69c3acf7bfe36e868b1dadb", "text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. 
Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.", "title": "" }, { "docid": "e3d8ce945e727e8b31a764ffd226353b", "text": "Epilepsy is a neurological disorder with prevalence of about 1-2% of the world’s population (Mormann, Andrzejak, Elger & Lehnertz, 2007). It is characterized by sudden recurrent and transient disturbances of perception or behaviour resulting from excessive synchronization of cortical neuronal networks; it is a neurological condition in which an individual experiences chronic abnormal bursts of electrical discharges in the brain. The hallmark of epilepsy is recurrent seizures termed \"epileptic seizures\". Epileptic seizures are divided by their clinical manifestation into partial or focal, generalized, unilateral and unclassified seizures (James, 1997; Tzallas, Tsipouras & Fotiadis, 2007a, 2009). Focal epileptic seizures involve only part of cerebral hemisphere and produce symptoms in corresponding parts of the body or in some related mental functions. Generalized epileptic seizures involve the entire brain and produce bilateral motor symptoms usually with loss of consciousness. Both types of epileptic seizures can occur at all ages. Generalized epileptic seizures can be subdivided into absence (petit mal) and tonic-clonic (grand mal) seizures (James, 1997).", "title": "" }, { "docid": "68388b2f67030d85030d5813df2e147d", "text": "Radio signal propagation modeling plays an important role in designing wireless communication systems. The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.", "title": "" } ]
scidocsrr
e70cedd385532e99fbddfbf98fdb5494
Variations in cognitive maps: understanding individual differences in navigation.
[ { "docid": "f13ffbb31eedcf46df1aaecfbdf61be9", "text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.", "title": "" } ]
[ { "docid": "afd6d41c0985372a88ff3bb6f91ce5b5", "text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ggplot2 elegant graphics for data analysis sources. Yeah, sources about the books from countries in the world are provided.", "title": "" }, { "docid": "7ca2bde925c4bb322fe266b4a9de5004", "text": "Nuclear transfer of an oocyte into the cytoplasm of another enucleated oocyte has shown that embryogenesis and implantation are influenced by cytoplasmic factors. We report a case of a 30-year-old nulligravida woman who had two failed IVF cycles characterized by all her embryos arresting at the two-cell stage and ultimately had pronuclear transfer using donor oocytes. After her third IVF cycle, eight out of 12 patient oocytes and 12 out of 15 donor oocytes were fertilized. The patient's pronuclei were transferred subzonally into an enucleated donor cytoplasm resulting in seven reconstructed zygotes. Five viable reconstructed embryos were transferred into the patient's uterus resulting in a triplet pregnancy with fetal heartbeats, normal karyotypes and nuclear genetic fingerprinting matching the mother's genetic fingerprinting. Fetal mitochondrial DNA profiles were identical to those from donor cytoplasm with no detection of patient's mitochondrial DNA. This report suggests that a potentially viable pregnancy with normal karyotype can be achieved through pronuclear transfer. Ongoing work to establish the efficacy and safety of pronuclear transfer will result in its use as an aid for human reproduction.", "title": "" }, { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. 
The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" }, { "docid": "d055bc9f5c7feb9712ec72f8050f5fd8", "text": "An intelligent observer looks at the world and sees not only what is, but what is moving and what can be moved. In other words, the observer sees how the present state of the world can transform in the future. We propose a model that predicts future images by learning to represent the present state and its transformation given only a sequence of images. To do so, we introduce an architecture with a latent state composed of two components designed to capture (i) the present image state and (ii) the transformation between present and future states, respectively. We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states by applying the accumulated state transformation with a learned operator. We describe how this model can be integrated into an encoder-decoder convolutional neural network (CNN) architecture that uses weighted residual connections to integrate representations of the past with representations of the future. Qualitatively, our approach generates image sequences that are stable and capture realistic motion over multiple predicted frames, without requiring adversarial training. Quantitatively, our method achieves prediction results comparable to state-of-the-art results on standard image prediction benchmarks (Moving MNIST, KTH, and UCF101).", "title": "" }, { "docid": "0ff1837d40bbd6bbfe4f5ec69f83de90", "text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.", "title": "" }, { "docid": "dc8af68ed9bbfd8e24c438771ca1d376", "text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. 
These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.", "title": "" }, { "docid": "6bc94f9b5eb90ba679964cf2a7df4de4", "text": "New high-frequency data collection technologies and machine learning analysis techniques could offer new insights into learning, especially in tasks in which students have ample space to generate unique, personalized artifacts, such as a computer program, a robot, or a solution to an engineering challenge. To date most of the work on learning analytics and educational data mining has focused on online courses or cognitive tutors, in which the tasks are more structured and the entirety of interaction happens in front of a computer. In this paper, I argue that multimodal learning analytics could offer new insights into students' learning trajectories, and present several examples of this work and its educational application.", "title": "" }, { "docid": "04a7039b3069816b157e5ca0f7541b94", "text": "Visible light communication is innovative and active technique in modern digital wireless communication. In this paper, we describe a new innovative vlc system which having a better performance and efficiency to other previous system. Visible light communication (VLC) is an efficient technology in order to the improve the speed and the robustness of the communication link in indoor optical wireless communication system. In order to achieve high data rate in communication for VLC system, multiple input multiple output (MIMO) with OFDM is a feasible option. However, the contemporary MIMO with OFDM VLC system are lacks from the diversity and the experiences performance variation through different optical channels. This is mostly because of characteristics of optical elements used for making the receiver. In this paper, we analyze the imaging diversity in MIMO with OFDM VLC system. Simulation results are shown diversity achieved in the different cases.", "title": "" }, { "docid": "53d04c06efb468e14e2ee0b485caf66f", "text": "The analysis of time-oriented data is an important task in many application scenarios. In recent years, a variety of techniques for visualizing such data have been published. This variety makes it difficult for prospective users to select methods or tools that are useful for their particular task at hand. In this article, we develop and discuss a systematic view on the diversity of methods for visualizing time-oriented data. With the proposed categorization we try to untangle the visualization of time-oriented data, which is such an important concern in Visual Analytics. The categorization is not only helpful for users, but also for researchers to identify future tasks in Visual Analytics. r 2007 Elsevier Ltd. All rights reserved. MSC: primary 68U05; 68U35", "title": "" }, { "docid": "c49d4b1f2ac185bcb070cb105798417a", "text": "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. 
The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face ∗Equal contribution. †Work was done during an internship at Megvii Research. detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.", "title": "" }, { "docid": "11d1978a3405f63829e02ccb73dcd75f", "text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.", "title": "" }, { "docid": "8cd666c0796c0fe764bc8de0d7a20fa3", "text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.", "title": "" }, { "docid": "8f183ac262aac98c563bf9dcc69b1bf5", "text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. 
The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.", "title": "" }, { "docid": "2eb344b6701139be184624307a617c1b", "text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .", "title": "" }, { "docid": "b5b8ae3b7b307810e1fe39630bc96937", "text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. 
The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.", "title": "" }, { "docid": "cfc5b5676552c2e90b70ed3cfa5ac022", "text": "UNLABELLED\nWe present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.\n\n\nPAPERCLIP\nVIDEO ABSTRACT.", "title": "" }, { "docid": "526e36dd9e3db50149687ea6358b4451", "text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "96d8174d188fa89c17ffa0b255584628", "text": "In the past years we have witnessed the emergence of the new discipline of computational social science, which promotes a new data-driven and computation-based approach to social sciences. In this article we discuss how the availability of new technologies such as online social media and mobile smartphones has allowed researchers to passively collect human behavioral data at a scale and a level of granularity that were just unthinkable some years ago. 
We also discuss how these digital traces can then be used to prove (or disprove) existing theories and develop new models of human behavior.", "title": "" }, { "docid": "1d9e03eb11328f96eaee1f70dcf2a539", "text": "Crossbar architecture has been widely adopted in neural network accelerators due to the efficient implementations on vector-matrix multiplication operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically because of the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied method to shrink the model size, whereas prior work for CNNs compression rarely considered the crossbar architecture and the corresponding mapping method and cannot be directly utilized by crossbar-based neural network accelerators. This paper proposes a crossbar-aware pruning framework based on a formulated $L_{0}$ -norm constrained optimization problem. Specifically, we design an $L_{0}$ -norm constrained gradient descent with relaxant probabilistic projection to solve this problem. Two types of sparsity are successfully achieved: 1) intuitive crossbar-grain sparsity and 2) column-grain sparsity with output recombination, based on which we further propose an input feature maps reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on the median-scale CIFAR10 data set and the large-scale ImageNet data set with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%–72% with insignificant accuracy degradation. This paper significantly reduce the resource overhead and the related energy cost and provides a new co-design solution for mapping CNNs onto various crossbar devices with much better efficiency.", "title": "" } ]
scidocsrr
93a44a6ab7dd1262366bd2f3081f0595
RIOT : One OS to Rule Them All in the IoT
[ { "docid": "5fd6462e402e3a3ab1e390243d80f737", "text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.", "title": "" }, { "docid": "cf54c485a54d9b22d06710684061eac2", "text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.", "title": "" } ]
[ { "docid": "235e6e4537e9f336bf80e6d648fdc8fb", "text": "Communication between the deaf and non-deaf has always been a very cumbersome task. This paper aims to cover the various prevailing methods of deaf-mute communication interpreter system. The two broad classification of the communication methodologies used by the deaf –mute people are Wearable Communication Device and Online Learning System. Under Wearable communication method, there are Glove based system, Keypad method and Handicom Touchscreen. All the above mentioned three sub-divided methods make use of various sensors, accelerometer, a suitable microcontroller, a text to speech conversion module, a keypad and a touch-screen. The need for an external device to interpret the message between a deaf –mute and non-deaf-mute people can be overcome by the second method i.e online learning system. The Online Learning System has different methods under it, five of which are explained in this paper. The five sub-divided methods areSLIM module, TESSA, Wi-See Technology, SWI_PELE System and Web-Sign Technology. The working of the individual components used and the operation of the whole system for the communication purpose has been explained in detail in this paper.", "title": "" }, { "docid": "6daa93f2a7cfaaa047ecdc04fb802479", "text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.", "title": "" }, { "docid": "4c67d3686008e377220314323a35eecb", "text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.", "title": "" }, { "docid": "8272f6d511cc8aa104ba10c23deb17a5", "text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. 
Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.", "title": "" }, { "docid": "ad584a07befbfff1dff36c18ea830a4e", "text": "In this paper, we review some of the novel emerging memory technologies and how they can enable energy-efficient implementation of large neuromorphic computing systems. We will highlight some of the key aspects of biological computation that are being mimicked in these novel nanoscale devices, and discuss various strategies employed to implement them efficiently. Though large scale learning systems have not been implemented using these devices yet, we will discuss the ideal specifications and metrics to be satisfied by these devices based on theoretical estimations and simulations. We also outline the emerging trends and challenges in the path towards successful implementations of large learning systems that could be ubiquitously deployed for a wide variety of cognitive computing tasks.", "title": "" }, { "docid": "116ab901f60a7282f8a2ea245c59b679", "text": "Image classification is a vital technology many people in all arenas of human life utilize. It is pervasive in every facet of the social, economic, and corporate spheres of influence, worldwide. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep learning algorithms. This paper uses Convolutional Neural Networks (CNN) to classify handwritten digits in the MNIST database, and scenes in the CIFAR-10 database. Our proposed method preprocesses the data in the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By separating the image into different subbands, important feature learning occurs over varying low to high frequencies. The fusion of the learned low and high frequency features, and processing the combined feature mapping results in an increase in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings reveal a substantial increase in accuracy.", "title": "" }, { "docid": "a52a90bb69f303c4a31e4f24daf609e6", "text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. 
lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.", "title": "" }, { "docid": "dacf2f44c3f8fc0931dceda7e4cb9bef", "text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.", "title": "" }, { "docid": "a9709367bc84ececd98f65ed7359f6b0", "text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.", "title": "" }, { "docid": "9f68df51d0d47b539a6c42207536d012", "text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.", "title": "" }, { "docid": "5591247b2e28f436da302757d3f82122", "text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. 
As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.", "title": "" }, { "docid": "d51ef75ccf464cc03656210ec500db44", "text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.", "title": "" }, { "docid": "67a3f92ab8c5a6379a30158bb9905276", "text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.", "title": "" }, { "docid": "c24bfd3b7bbc8222f253b004b522f7d5", "text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).", "title": "" }, { "docid": "6d3dc0ea95cd8626aff6938748b58d1a", "text": "Mango cultivation methods being adopted currently are ineffective and low productive despite consuming huge man power. Advancements in robust unmanned aerial vehicles (UAV's), high speed image processing algorithms and machine vision techniques, reinforce the possibility of transforming agricultural scenario to modernity within prevailing time and energy constraints. Present paper introduces Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems complemented with necessary ancillaries. 
It could hover around the trees, detect the ripe mangoes, and cut and collect them. The paper also sheds light on the available Agribots, which have mostly been limited to research labs. The AAM robot is the first of its kind and, once implemented, could pave the way for the next generation of Agribots capable of increasing agricultural productivity and justifying the existence of intelligent machines.", "title": "" }
The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.", "title": "" }, { "docid": "acc26655abb2a181034db8571409d0a5", "text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.", "title": "" }, { "docid": "941d7a7a59261fe2463f42cad9cff004", "text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of world. It has got several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has got applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source wise chemical constituents and therapeutic uses. As well as, a little attempt has been done to review the techniques used for its quality control and safety.", "title": "" } ]
scidocsrr
5bb3b78d1b976ec8a8b48f408a80c8a2
CamBP: a camera-based, non-contact blood pressure monitor
[ { "docid": "2531d8d05d262c544a25dbffb7b43d67", "text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.", "title": "" } ]
[ { "docid": "28a6a89717b3894b181e746e684cfad5", "text": "When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives. In order to explore these stories, one needs a map to navigate unfamiliar territory. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents maximizing coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. We first formalize characteristics of good maps and formulate their construction as an optimization problem. Then we provide efficient methods with theoretical guarantees for generating maps. Finally, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with a real-world dataset demonstrate that the method is able to produce maps which help users acquire knowledge efficiently.", "title": "" }, { "docid": "7b3dd8bdc75bf99f358ef58b2d56e570", "text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.", "title": "" }, { "docid": "c31dbdee3c36690794f3537c61cfc1e3", "text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper", "title": "" }, { "docid": "9497731525a996844714d5bdbca6ae03", "text": "Recently, machine learning is widely used in applications and cloud services. 
As an emerging field of machine learning, deep learning shows an excellent ability to solve complex learning problems. To give users a better experience, high-performance implementations of deep learning applications are very important. As a common means of accelerating algorithms, FPGAs offer high performance, low power consumption, small size and other desirable characteristics. We therefore use an FPGA to design a deep learning accelerator; the accelerator focuses on the implementation of the prediction process, data access optimization and the pipeline structure. Compared with a 2.3 GHz Core 2 CPU, our accelerator achieves promising results.", "title": "" }, { "docid": "ea8450e8e1a217f1af596bb70051f5e7", "text": "Supplier selection is nowadays one of the critical topics in supply chain management. This paper presents a new decision-making approach for the group multi-criteria supplier selection problem, which clubs the supplier selection process with order allocation for dynamic supply chains to cope with market variations. More specifically, the developed approach imitates the knowledge acquisition and manipulation in a manner similar to the decision makers who have gathered considerable knowledge and expertise in the procurement domain. Nevertheless, under many conditions, exact data are inadequate to model real-life situations, and fuzzy logic can be incorporated to handle the vagueness of the decision makers. As per this concept, the fuzzy-AHP method is used first for supplier selection through four classes (CLASS I: Performance strategy, CLASS II: Quality of service, CLASS III: Innovation and CLASS IV: Risk), which are qualitatively meaningful. Thereafter, using a simulation-based fuzzy TOPSIS technique, the criteria application is quantitatively evaluated for order allocation among the selected suppliers. As a result, the approach generates decision-making knowledge, and thereafter, the developed combination of rules for order allocation can easily be interpreted, adopted and, at the same time if necessary, modified by decision makers. To demonstrate the applicability of the proposed approach, an illustrative example is presented and the results analyzed. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2ff17287164ea85e0a41974e5da0ecb6", "text": "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets.
When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.", "title": "" }, { "docid": "c673f0f39f874f8dfde363d6a030e8dd", "text": "Big data refers to data volumes in the range of exabytes (1018) and beyond. Such volumes exceed the capacity of current on-line storage systems and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the exabyte/year range. But, its creation and aggregation are accelerating and will approach the zettabyte/year range within a few years. Volume is only one aspect of big data; other attributes are variety, velocity, value, and complexity. Storage and data transport are technology issues, which seem to be solvable in the near-term, but represent longterm challenges that require research and new paradigms. We analyze the issues and challenges as we begin a collaborative research program into methodologies for big data analysis and design.", "title": "" }, { "docid": "593aae604e5ecd7b6d096ed033a303f8", "text": "We describe the first mobile app for identifying plant species using automatic visual recognition. The system – called Leafsnap – identifies tree species from photographs of their leaves. Key to this system are computer vision components for discarding non-leaf images, segmenting the leaf from an untextured background, extracting features representing the curvature of the leaf’s contour over multiple scales, and identifying the species from a dataset of the 184 trees in the Northeastern United States. Our system obtains state-of-the-art performance on the real-world images from the new Leafsnap Dataset – the largest of its kind. Throughout the paper, we document many of the practical steps needed to produce a computer vision system such as ours, which currently has nearly a million users.", "title": "" }, { "docid": "63405a3fc4815e869fc872bb96bb8a33", "text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.", "title": "" }, { "docid": "3d10793b2e4e63e7d639ff1e4cdf04b6", "text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. 
Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.", "title": "" }, { "docid": "e1b6de27518c1c17965a891a8d14a1e1", "text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.", "title": "" }, { "docid": "1865cf66083c30d74b555eab827d0f5f", "text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.", "title": "" }, { "docid": "5a3f542176503ddc6fcbd0fe29f08869", "text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. 
There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.", "title": "" }, { "docid": "fd5c5ff7c97b9d6b6bfabca14631b423", "text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.", "title": "" }, { "docid": "30d0453033d3951f5b5faf3213eacb89", "text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.", "title": "" }, { "docid": "56e1778df9d5b6fa36cbf4caae710e67", "text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. 
This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.", "title": "" }, { "docid": "2431ee8fb0dcfd84c61e60ee41a95edb", "text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.", "title": "" }, { "docid": "6c1c3bc94314ce1efae62ac3ec605d4a", "text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.", "title": "" }, { "docid": "0dc1119bf47ffa6d032c464a54d5d173", "text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). 
Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.", "title": "" }, { "docid": "419499ced8902a00909c32db352ea7f5", "text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.", "title": "" } ]
scidocsrr
44c45c53021be2a5091a012f9299fe3c
First Step Towards End-to-End Parametric TTS Synthesis: Generating Spectral Parameters with Neural Attention
[ { "docid": "280c39aea4584e6f722607df68ee28dc", "text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.", "title": "" } ]
[ { "docid": "b4f1559a05ded4584ef06f2d660053ac", "text": "A circularly polarized microstrip array antenna is proposed for Ka-band satellite applications. The antenna element consists of an L-shaped patch with parasitic circular-ring radiator. A sequentially rotated 2×2 antenna array exhibits a wideband 3-dB axial ratio bandwidth of 20.6% (25.75 GHz - 31.75 GHz) and 2;1-VSWR bandwidth of 24.0% (25.5 GHz - 32.5 GHz). A boresight gain of 10-11.8 dBic is achieved across a frequency range from 26 GHz to 32 GHz. An 8×8 antenna array exhibits a boresight gain of greater than 24 dBic over 27.25 GHz-31.25 GHz.", "title": "" }, { "docid": "c601040737c42abcef996e027fabc8cf", "text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the `̀ . . . store of profits to be realised at a later date.'' Their definition follows Srivastava and Shocker (1991) with brand equity suggested as; . . . 
the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow. This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as “... the potential strategic contributions and benefits that a brand can make to a company.” In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and “insulates the brand from a measure of competitive threats.” Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. (Figure 1: The brand equity chain.) Farquhar (1989) suggests a relationship between high brand equity and market power asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2.
Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power. Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value. This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.", "title": "" }, { "docid": "81c02e708a21532d972aca0b0afd8bb5", "text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.", "title": "" }, { "docid": "ce2139f51970bfa5bd3738392f55ea48", "text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of a H-plane T junction of rectangular waveguide, one circular waveguide as an Eplane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physicalport and four-mode polarizer were researched by solving ScatteringMatrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ3, low complexity and manufacturing cost. A couple of the polarizer has been manufactured and tested, and the experimental results are basically consistent with the theories.", "title": "" }, { "docid": "33bc830ab66c9864fd4c45c463c2c9da", "text": "We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to the direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need of hand-drawn sketching at all. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits. 
Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high quality synthesis results without additional post-processing.", "title": "" }, { "docid": "d1f02e2f57cffbc17387de37506fddc9", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "18bc3abbd6a4f51fdcfbafcc280f0805", "text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. 
Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.", "title": "" }, { "docid": "f1a2d243c58592c7e004770dfdd4a494", "text": "Dynamic voltage scaling (DVS), which adjusts the clockspeed and supply voltage dynamically, is an effective techniquein reducing the energy consumption of embedded real-timesystems. The energy efficiency of a DVS algorithm largelydepends on the performance of the slack estimation methodused in it. In this paper, we propose a novel DVS algorithmfor periodic hard real-time tasks based on an improved slackestimation algorithm. Unlike the existing techniques, the proposedmethod takes full advantage of the periodic characteristicsof the real-time tasks under priority-driven schedulingsuch as EDF. Experimental results show that the proposed algorithmreduces the energy consumption by 20~40% over theexisting DVS algorithm. The experiment results also show thatour algorithm based on the improved slack estimation methodgives comparable energy savings to the DVS algorithm basedon the theoretically optimal (but impractical) slack estimationmethod.", "title": "" }, { "docid": "eebeb59c737839e82ecc20a748b12c6b", "text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.", "title": "" }, { "docid": "f05dd13991878e5395561ce24564975c", "text": "Detecting informal settlements might be one of the most challenging tasks within urban remote sensing. This phenomenon occurs mostly in developing countries. In order to carry out the urban planning and development tasks necessary to improve living conditions for the poorest world-wide, an adequate spatial data basis is needed (see Mason, O. S. & Fraser, C. S., 1998). This can only be obtained through the analysis of remote sensing data, which represents an additional challenge from a technical point of view. Formal settlements by definition are mapped sufficiently for most purposes. However, this does not hold for informal settlements. Due to their microstructure and instability of shape, the detection of these settlements is substantially more difficult. Hence, more sophisticated data and methods of image analysis are necessary, which ideally act as a spatial data basis for a further informal settlement management. While these methods are usually quite labour-intensive, one should nonetheless bear in mind cost-effectivity of the applied methods and tools. In the present article, it will be shown how eCognition can be used to detect and discriminate informal settlements from other land-use-forms by describing typical characteristics of colour, texture, shape and context. This software is completely objectoriented and uses a patented, multi-scale image segmentation approach. 
The generated segments act as image objects whose physical and contextual characteristics can be described by means of fuzzy logic. The article will show methods and strategies using eCognition to detect informal settlements from high resolution space-borne image data such as IKONOS. A final discussion of the results will be given.", "title": "" }, { "docid": "e2e8aac3945fa7206dee21792133b77b", "text": "We provide an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model. The method is based on appropriate permutational distributions of sufficient statistics. It is useful for analysing small or unbalanced binary data with covariates. It also applies to small-sample clustered binary data. We illustrate the method by analysing several biomedical data sets.", "title": "" }, { "docid": "14dfa311d7edf2048ebd4425ae38d3e2", "text": "Forest species recognition has been traditionally addressed as a texture classification problem, and explored using standard texture methods such as Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and Gabor Filters. Deep learning techniques have been a recent focus of research for classification problems, with state-of-the-art results for object recognition and other tasks, but are not yet widely used for texture problems. This paper investigates the usage of deep learning techniques, in particular Convolutional Neural Networks (CNN), for texture classification in two forest species datasets - one with macroscopic images and another with microscopic images. Given the higher resolution images of these problems, we present a method that is able to cope with the high-resolution texture images so as to achieve high accuracy and avoid the burden of training and defining an architecture with a large number of free parameters. On the first dataset, the proposed CNN-based method achieves 95.77% of accuracy, compared to state-of-the-art of 97.77%. On the dataset of microscopic images, it achieves 97.32%, beating the best published result of 93.2%.", "title": "" }, { "docid": "9343a2775b5dac7c48c1c6cec3d0a59c", "text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) = B. The sequence S may make use of the operations: Change, Insert, Delete and Swaps, each of constant cost W_C, W_I, W_D, and W_S respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time O(|A| * |B| * |V|^s * s), where s = min(4W_C, W_I+W_D)/W_S + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W_S = 0, (b) W_S > 0, W_C = W_I = W_D = ∞;\n (3) proof that ESSCP, with W_I < W_C = W_D = ∞, 0 < W_S < ∞, suitably encoded, is NP-complete.
(The remaining case, W_S = ∞, reduces ESSCP to the string-to-string correction problem of [1], where an O(|A| * |B|) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.", "title": "" }, { "docid": "5d6bd34fb5fdb44950ec5d98e77219c3", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different lengths. We asked the subjects to score the subjective perception level on a five-point Likert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.", "title": "" }, { "docid": "ca82ceafc6079f416d9f7b94a7a6a665", "text": "When Big Data and cloud computing join forces, several domains such as healthcare, disaster prediction and decision making become easier and much more beneficial to users in terms of information gathering. Although cloud computing reduces the time and cost of analyzing big data, it may harm the confidentiality and integrity of sensitive data; for instance, in healthcare, when analyzing a disease's spreading area, the names of the infected people must remain secure, hence the obligation to adopt a secure model that protects sensitive data from malicious users. Several case studies on the integration of big data in cloud computing stress how much easier it would be to analyze and manage big data in this complex environment. Companies must consider outsourcing their sensitive data to the cloud to take advantage of its beneficial resources such as huge storage, fast calculation, and availability, yet cloud computing might harm the security of data stored and computed in it (confidentiality, integrity). Therefore, a strict paradigm must be adopted by organizations to prevent their outsourced data from being stolen, damaged or lost. In this paper, we compare the existing models for securing big data in cloud computing. Then, we propose our own model to secure Big Data in the cloud computing environment, considering the lifecycle of data from uploading and storage to calculation and destruction.", "title": "" }, { "docid": "0fdd7f5c5cd1225567e89b456ef25ea0", "text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene.
This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.", "title": "" }, { "docid": "922354208e78ed5154b8dfbe4ed14c7e", "text": "Digital systems, especially those for mobile applications are sensitive to power consumption, chip size and costs. Therefore they are realized using fixed-point architectures, either dedicated HW or programmable DSPs. On the other hand, system design starts from a floating-point description. These requirements have been the motivation for FRIDGE (Fixed-point pRogrammIng DesiGn Environment), a design environment for the specification, evaluation and implementation of fixed-point systems. FRIDGE offers a seamless design flow from a floating- point description to a fixed-point implementation. Within this paper we focus on two core capabilities of FRIDGE: (1) the concept of an interactive, automated transformation of floating-point programs written in ANSI-C into fixed-point specifications, based on an interpolative approach. The design time reductions that can be achieved make FRIDGE a key component for an efficient HW/SW-CoDesign. (2) a fast fixed-point simulation that performs comprehensive compile-time analyses, reducing simulation time by one order of magnitude compared to existing approaches.", "title": "" }, { "docid": "bfee1553c6207909abc9820e741d6e01", "text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.", "title": "" }, { "docid": "74812252b395dca254783d05e1db0cf5", "text": "Cyber-Physical Security Testbeds serve as valuable experimental platforms to implement and evaluate realistic, complex cyber attack-defense experiments. Testbeds, unlike traditional simulation platforms, capture communication, control and physical system characteristics and their interdependencies adequately in a unified environment. In this paper, we show how the PowerCyber CPS testbed at Iowa State was used to implement and evaluate cyber attacks on one of the fundamental Wide-Area Control applications, namely, the Automatic Generation Control (AGC). 
We provide a brief overview of the implementation of the experimental setup on the testbed. We then present a case study using the IEEE 9 bus system to evaluate the impacts of cyber attacks on AGC. Specifically, we analyzed the impacts of measurement based attacks that manipulated the tie-line and frequency measurements, and control based attacks that manipulated the ACE values sent to generators. We found that these attacks could potentially create under frequency conditions and could cause unnecessary load shedding. As part of future work, we plan to extend this work and utilize the experimental setup to implement other sophisticated, stealthy attack vectors and also develop attack-resilient algorithms to detect and mitigate such attacks.", "title": "" }, { "docid": "2ecb4d841ef57a3acdf05cbb727aecbf", "text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.", "title": "" } ]
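As an aside on the boosting abstract that closes the list above: it names AdaBoost but gives no pseudocode. The sketch below is a minimal, generic AdaBoost with one-feature threshold stumps, written in Python purely for illustration; it is not code from the cited overview, and the data layout (a NumPy matrix X with labels y in {-1, +1}) is an assumption of this sketch.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Minimal AdaBoost with threshold stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # example weights, re-tuned every round
    ensemble = []                       # list of (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for j in range(d):              # exhaustive search for the best stump
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = np.clip(err, 1e-12, 1 - 1e-12)            # guard against degenerate stumps
        alpha = 0.5 * np.log((1 - err) / err)           # weight of this weak learner
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)                  # misclassified points get heavier
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict_adaboost(ensemble, X):
    score = np.zeros(X.shape[0])
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)
```

The exponential re-weighting line is the step that overview discusses at length: examples the current weak learner gets wrong dominate the next round, which underlies the margin-based explanation of boosting's resistance to overfitting mentioned in the abstract.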
scidocsrr
0ee681e15fad3ce020562df645751692
Risk assessment model selection in construction industry
[ { "docid": "ee0d11cbd2e723aff16af1c2f02bbc2b", "text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and build a practical supplier selection problem to verify our proposed method and compare it with other methods. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 5 5342601x5312; fax: +886 5 531 2077. E-mail addresses: jwwang@mail.nhu.edu.tw (J.-W. Wang), chcheng@yuntech.edu.tw (C.-H. Cheng), lendlice@ms12.url.com.tw (K.-C. Huang).", "title": "" } ]
[ { "docid": "14fb6228827657ba6f8d35d169ad3c63", "text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.", "title": "" }, { "docid": "e06005f63efd6f8ca77f8b91d1b3b4a9", "text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.", "title": "" }, { "docid": "0850f46a4bcbe1898a6a2dca9f61ea61", "text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.", "title": "" }, { "docid": "1742428cb9c72c90f94d0beb6ad3e086", "text": "A key issue in face recognition is to seek an effective descriptor for representing face appearance. 
In the context of considering the face image as a set of small facial regions, this paper presents a new face representation approach coined spatial feature interdependence matrix (SFIM). Unlike classical face descriptors which usually use a hierarchically organized or a sequentially concatenated structure to describe the spatial layout features extracted from local regions, SFIM is attributed to the exploitation of the underlying feature interdependences regarding local region pairs inside a class specific face. According to SFIM, the face image is projected onto an undirected connected graph in a manner that explicitly encodes feature interdependence-based relationships between local regions. We calculate the pair-wise interdependence strength as the weighted discrepancy between two feature sets extracted in a hybrid feature space fusing histograms of intensity, local binary pattern and oriented gradients. To achieve the goal of face recognition, our SFIM-based face descriptor is embedded in three different recognition frameworks, namely nearest neighbor search, subspace-based classification, and linear optimization-based classification. Extensive experimental results on four well-known face databases and comprehensive comparisons with the state-of-the-art results are provided to demonstrate the efficacy of the proposed SFIM-based descriptor.", "title": "" }, { "docid": "4d3468bb14b7ad933baac5c50feec496", "text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.", "title": "" }, { "docid": "68720a44720b4d80e661b58079679763", "text": "The value of involving people as ‘users’ or ‘participants’ in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called ‘informant design’, which advocates efficiency of input from different people: maximizing the value of contributions tlom various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at difTerent stages to help us design an interactive learning environment for teaching ecology.", "title": "" }, { "docid": "42903610920a47773627a33db25590f3", "text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. 
This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.", "title": "" }, { "docid": "338e037f4ec9f6215f48843b9d03f103", "text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).", "title": "" }, { "docid": "0d9057d8a40eb8faa7e67128a7d24565", "text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.", "title": "" }, { "docid": "2ab8c692ef55d2501ff61f487f91da9c", "text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. 
It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.", "title": "" }, { "docid": "9876e3f1438d39549a93ce4011bc0df0", "text": "SIMON (Signal Interpretation and MONitoring) continuously collects, permanently stores, and processes bedside medical device data. Since 1998 SIMON has monitored over 3500 trauma intensive care unit (TICU) patients, representing approximately 250,000 hours of continuous monitoring and two billion data points, and is currently operational on all 14 TICU beds at Vanderbilt University Medical Center. This repository of dense physiologic data (heart rate, arterial, pulmonary, central venous, intracranial, and cerebral perfusion pressures, arterial and venous oxygen saturations, and other parameters sampled second-by-second) supports research to identify “new vital signs” features of patient physiology only observable through dense data capture and analysis more predictive of patient status than current measures. SIMON’s alerting and reporting capabilities, including web-based display, sentinel event notification via alphanumeric pagers, and daily summary reports of vital sign statistics, allow these discoveries to be rapidly tested and implemented in a working clinical environment. This", "title": "" }, { "docid": "5ed409feee70554257e4974ab99674e0", "text": "Text mining and information retrieval in large collections of scientific literature require automated processing systems that analyse the documents’ content. However, the layout of scientific articles is highly varying across publishers, and common digital document formats are optimised for presentation, but lack structural information. To overcome these challenges, we have developed a processing pipeline that analyses the structure a PDF document using a number of unsupervised machine learning techniques and heuristics. Apart from the meta-data extraction, which we reused from previous work, our system uses only information available from the current document and does not require any pre-trained model. First, contiguous text blocks are extracted from the raw character stream. Next, we determine geometrical relations between these blocks, which, together with geometrical and font information, are then used categorize the blocks into different classes. Based on this resulting logical structure we finally extract the body text and the table of contents of a scientific article. We separately evaluate the individual stages of our pipeline on a number of different datasets and compare it with other document structure analysis approaches. We show that it outperforms a state-of-the-art system in terms of the quality of the extracted body text and table of contents. Our unsupervised approach could provide a basis for advanced digital library scenarios that involve diverse and dynamic corpora.", "title": "" }, { "docid": "012d6b86279e65237a3ad4515e4e439f", "text": "The main purpose of the present study is to help managers cope with the negative effects of technostress on employee use of ICT. 
Drawing on transaction theory of stress (Cooper, Dewe, & O’Driscoll, 2001) and information systems (IS) continuance theory (Bhattacherjee, 2001) we investigate the effects of technostress on employee intentions to extend the use of ICT at work. Our results show that factors that create and inhibit technostress affect both employee satisfaction with the use of ICT and employee intentions to extend the use of ICT. Our findings have important implications for the management of technostress with regard to both individual stress levels and organizational performance. A key implication of our research is that managers should implement strategies for coping with technostress through the theoretical concept of technostress inhibitors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "da64b7855ec158e97d48b31e36f100a5", "text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.", "title": "" }, { "docid": "8787335d8f5a459dc47b813fd385083b", "text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. 
So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.", "title": "" }, { "docid": "e643f7f29c2e96639a476abb1b9a38b1", "text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.", "title": "" }, { "docid": "9b1cf7cb855ba95693b90efacc34ac6d", "text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.", "title": "" }, { "docid": "37c8fa72d0959a64460dbbe4fdb8c296", "text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. 
This paper also presents a classification and categorization of various problems of space planning.", "title": "" }, { "docid": "27fd4240452fe6f08af7fbf86e8acdf5", "text": "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment. The code used in the experiments is available at https://github.com/flowersteam/ Curiosity_Driven_Goal_Exploration.", "title": "" }, { "docid": "f383dd5dd7210105406c2da80cf72f89", "text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".", "title": "" } ]
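One of the abstracts that closes the list above (the architectural layout generator) routes evacuation paths through the generated floor plan with Dijkstra's algorithm but does not spell it out. For reference, here is a short generic Dijkstra over a room-adjacency graph in Python; the graph, node names and distances are invented examples, not data from that paper.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbour, edge_weight >= 0).
    Returns a dict of shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical room-adjacency graph: edge weights are door-to-door distances
rooms = {
    "bedroom": [("living", 3.0)],
    "kitchen": [("living", 2.0)],
    "living":  [("bedroom", 3.0), ("kitchen", 2.0), ("exit", 4.0)],
    "exit":    [],
}
print(dijkstra(rooms, "bedroom"))   # shortest distance from the bedroom to each room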
scidocsrr
be7390a0d3790cb17b54f1b8b45dae52
Terminology Extraction: An Analysis of Linguistic and Statistical Approaches
[ { "docid": "8a043a1ac74da0ec0cd55d1c8b658666", "text": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In (Brill 1992), a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a small number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.", "title": "" } ]
[ { "docid": "b537af893b84a4c41edb829d45190659", "text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.", "title": "" }, { "docid": "de71bef095a0ef7fb4fb1b10d4136615", "text": "Active learning—a class of algorithms that iteratively searches for the most informative samples to include in a training dataset—has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored as determining informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object detection performance. Our first metric measures “localization tightness” of an object hypothesis, which is based on the overlapping ratio between the region proposal and the final prediction. Our second metric measures “localization stability” of an object hypothesis, which is based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced up to 25%. Moreover, on PASCAL 2007 and 2012 datasets our localization-stability method has an average relative improvement of 96.5% and 81.9% over the base-line method using classification only. Asian Conference on Computer Vision This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. 
Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2018 201 Broadway, Cambridge, Massachusetts 02139 Localization-Aware Active Learning for Object", "title": "" }, { "docid": "1a8e9b74d4c1a32299ca08e69078c70c", "text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.", "title": "" }, { "docid": "9c97262605b3505bbc33c64ff64cfcd5", "text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.", "title": "" }, { "docid": "df8f22d84cc8f6f38de90c2798889051", "text": "The large amount of videos popping up every day, make it is more and more critical that key information within videos can be extracted and understood in a very short time. Video summarization, the task of finding the smallest subset of frames, which still conveys the whole story of a given video, is thus of great significance to improve efficiency of video understanding. In this paper, we propose a novel Dilated Temporal Relational Generative Adversarial Network (DTR-GAN) to achieve framelevel video summarization. Given a video, it can select a set of key frames, which contains the most meaningful and compact information. Specifically, DTR-GAN learns a dilated temporal relational generator and a discriminator with three-player loss in an adversarial manner. A new dilated temporal relation (DTR) unit is introduced for enhancing temporal representation capturing. The generator aims to select key frames by using DTR units to effectively exploit global multi-scale temporal context and to complement the commonly used Bi-LSTM. 
To ensure that the summaries capture enough key video representation from a global perspective rather than a trivial randomly shorten sequence, we present a discriminator that learns to enforce both the information completeness and compactness of summaries via a three-player loss. The three-player loss includes the generated summary loss, the random summary loss, and the real summary (ground-truth) loss, which play important roles for better regularizing the learned model to obtain useful summaries. Comprehensive experiments on two public datasets SumMe and TVSum show the superiority of our DTR-GAN over the stateof-the-art approaches.", "title": "" }, { "docid": "c10bd86125db702e0839e2a3776e195b", "text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication-efficient, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms —Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.", "title": "" }, { "docid": "d93abfdc3bc20a23e533f3ad2e30b9c9", "text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.", "title": "" }, { "docid": "93d498adaee9070ffd608c5c1fe8e8c9", "text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. 
Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.", "title": "" }, { "docid": "72a6a7fe366def9f97ece6d1ddc46a2e", "text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.", "title": "" }, { "docid": "74d6c2fff4b67d05871ca0debbc4ec15", "text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.", "title": "" }, { "docid": "e68c73806392d10c3c3fd262f6105924", "text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. 
Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithmworks for deterministic processes, under the discounted return criterion. We prove that fuzzy Q -iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzyQ -iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cd67a23b3ed7ab6d97a198b0e66a5628", "text": "A growing number of children and adolescents are involved in resistance training in schools, fitness centers, and sports training facilities. In addition to increasing muscular strength and power, regular participation in a pediatric resistance training program may have a favorable influence on body composition, bone health, and reduction of sports-related injuries. Resistance training targeted to improve low fitness levels, poor trunk strength, and deficits in movement mechanics can offer observable health and fitness benefits to young athletes. However, pediatric resistance training programs need to be well-designed and supervised by qualified professionals who understand the physical and psychosocial uniqueness of children and adolescents. The sensible integration of different training methods along with the periodic manipulation of programs design variables over time will keep the training stimulus effective, challenging, and enjoyable for the participants.", "title": "" }, { "docid": "03be8a60e1285d62c34b982ddf1bcf58", "text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.", "title": "" }, { "docid": "f05f6d9eeff0b492b74e6ab18c4707ba", "text": "Communication is an interactive, complex, structured process involving agents that are capable of drawing conclusions from the information they have available about some real-life situations. 
Such situations are generally characterized as being imperfect. In this paper, we aim to address learning from the perspective of the communication between agents. To learn a collection of propositions concerning some situation is to incorporate it within one's knowledge about that situation. That is, the key factor in this activity is for the goal agent, where agents may switch role if appropriate, to integrate the information offered with what it already knows. This may require a process of belief revision, which suggests that the process of incorporation of new information should be modeled nonmonotonically. We shall employ for reasoning a three-valued based nonmonotonic logic that formalizes some aspects of revisable reasoning and it is accessible to implementation. The logic is sound and complete. A theorem-prover of the logic has successfully been implemented.", "title": "" }, { "docid": "9e65315d4e241dc8d4ea777247f7c733", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "04f4058d37a33245abf8ed9acd0af35d", "text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. 
When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.", "title": "" }, { "docid": "0c7c179f2f86e7289910cee4ca583e4b", "text": "This paper presents an algorithm for fingerprint image restoration using Digital Reaction-Diffusion System (DRDS). The DRDS is a model of a discrete-time discrete-space nonlinear reaction-diffusion dynamical system, which is useful for generating biological textures, patterns and structures. This paper focuses on the design of a fingerprint restoration algorithm that combines (i) a ridge orientation estimation technique using an iterative coarse-to-fine processing strategy and (ii) an adaptive DRDS having a capability of enhancing low-quality fingerprint images using the estimated ridge orientation. The phase-only image matching technique is employed for evaluating the similarity between an original fingerprint image and a restored image. The proposed algorithm may be useful for person identification applications using fingerprint images. key words: reaction-diffusion system, pattern formation, digital signal processing, digital filters, fingerprint restoration", "title": "" }, { "docid": "4b3d890a8891cd8c84713b1167383f6f", "text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.", "title": "" }, { "docid": "c15369f923be7c8030cc8f2b1f858ced", "text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. 
The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.", "title": "" } ]
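Among the abstracts above, the homomorphic-encryption accelerator one centres on multiplying very large polynomials with a number theoretic transform. As a reference point only, the Python sketch below implements the naive schoolbook product in the ring Z_q[x]/(x^n + 1); that ring is a common choice for LTV-style schemes but is assumed here rather than taken from the paper, and an NTT-based multiplier would compute the same result in O(n log n) and can be checked against this function.

```python
def polymul_negacyclic(a, b, q):
    """Schoolbook product of polynomials a, b (coefficient lists of equal
    length n) in Z_q[x]/(x^n + 1).  Reduction uses the identity x^n = -1."""
    n = len(a)
    assert len(b) == n
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:                      # wrap around with a sign flip: x^n = -1
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

# tiny example with assumed parameters n = 4, q = 97
print(polymul_negacyclic([1, 2, 0, 3], [5, 0, 1, 0], 97))   # -> [5, 7, 1, 17]
```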
scidocsrr
2248259ddc80c4f4f4eb8b028affc8ef
ASTRO: A Datalog System for Advanced Stream Reasoning
[ { "docid": "47ac4b546fe75f2556a879d6188d4440", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" }, { "docid": "4b03aeb6c56cc25ce57282279756d1ff", "text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.", "title": "" } ]
[ { "docid": "0a4a124589dffca733fa9fa87dc94b35", "text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.", "title": "" }, { "docid": "9d62bcab72472183be38ad67635d744d", "text": "In most challenging data analysis applications, data evolve over time and must be analyzed in near real time. Patterns and relations in such data often evolve over time, thus, models built for analyzing such data quickly become obsolete over time. In machine learning and data mining this phenomenon is referred to as concept drift. The objective is to deploy models that would diagnose themselves and adapt to changing data over time. This chapter provides an application oriented view towards concept drift research, with a focus on supervised learning tasks. First we overview and categorize application tasks for which the problem of concept drift is particularly relevant. Then we construct a reference framework for positioning application tasks within a spectrum of problems related to concept drift. Finally, we discuss some promising research directions from the application perspective, and present recommendations for application driven concept drift research and development.", "title": "" }, { "docid": "44ecfa6fb5c31abf3a035dea9e709d11", "text": "The issue of the variant vs. invariant in personality often arises in diVerent forms of the “person– situation” debate, which is based on a false dichotomy between the personal and situational determination of behavior. Previously reported data are summarized that demonstrate how behavior can vary as a function of subtle situational changes while individual consistency is maintained. Further discussion considers the personal source of behavioral invariance, the situational source of behavioral variation, the person–situation interaction, the nature of behavior, and the “personality triad” of persons, situations, and behaviors, in which each element is understood and predicted in terms of the other two. An important goal for future research is further development of theories and methods for conceptualizing and measuring the functional aspects of situations and of behaviors. One reason for the persistence of the person situation debate may be that it serves as a proxy for a deeper, implicit debate over values such as equality vs. individuality, determinism vs. free will, and Xexibility vs. consistency. However, these value dichotomies may be as false as the person–situation debate that they implicitly drive.  2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d940c448ef854fd8c50bdf08a03cd008", "text": "The Multi-task Cascaded Convolutional Networks (MTCNN) has recently demonstrated impressive results on jointly face detection and alignment. 
By using hard sample mining and training a model on the FER2013 dataset, we exploit the inherent correlation between face detection and facial expression recognition, and report the results of facial expression recognition based on MTCNN.", "title": "" }, { "docid": "55767c008ad459f570fb6b99eea0b26d", "text": "The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.", "title": "" }, { "docid": "46adb7a040a2d8a40910a9f03825588d", "text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.", "title": "" }, { "docid": "b98c34a4be7f86fb9506a6b1620b5d3e", "text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.", "title": "" }, { "docid": "0caf6ead2548d556d9dd0564fb59d8cc", "text": "A modified microstrip Franklin array antenna (MFAA) is proposed for short-range radar applications in vehicle blind spot information systems (BLIS). It is shown that the radiating performance [i.e., absolute gain value and half-power beamwidth (HPBW) angle] can be improved by increasing the number of radiators in the MFAA structure and assigning appropriate values to the antenna geometry parameters. The MFAA possesses a high absolute gain value (>10 dB), good directivity (HPBW <20°) in the E-plane, and large-range coverage (HPBW >80°) in the H-plane at an operating frequency of 24 GHz. Moreover, the 10-dB impedance bandwidth of the proposed antenna is around 250 MHz. The MFAA is, thus, an ideal candidate for automotive BLIS applications.", "title": "" }, { "docid": "edc89ba0554d32297a9aab7103c4abb9", "text": "During the last years, agile methods like eXtreme Programming have become increasingly popular. 
Parallel to this, more and more organizations rely on process maturity models to assess and improve their own processes or those of suppliers, since it has been getting clear that most project failures can be imputed to inconsistent, undisciplined processes. Many organizations demand CMMI compliance of projects where agile methods are employed. In this situation it is necessary to analyze the interrelations and mutual restrictions between agile methods and approaches for software process analysis and improvement. This paper analyzes to what extent the CMMI process areas can be covered by XP and where adjustments of XP have to be made. Based on this, we describe the limitations of CMMI in an agile environment and show that level 4 or 5 are not feasible under the current specifications of CMMI and XP.", "title": "" }, { "docid": "7a7c358eaa5752d6984a56429f58c556", "text": "If the training dataset is not very large, image recognition is usually implemented with the transfer learning methods. In these methods the features are extracted using a deep convolutional neural network, which was preliminarily trained with an external very-large dataset. In this paper we consider the nonparametric classification of extracted feature vectors with the probabilistic neural network (PNN). The number of neurons at the pattern layer of the PNN is equal to the database size, which causes the low recognition performance and high memory space complexity of this network. We propose to overcome these drawbacks by replacing the exponential activation function in the Gaussian Parzen kernel to the complex exponential functions in the Fej\\'er kernel. We demonstrate that in this case it is possible to implement the network with the number of neurons in the pattern layer proportional to the cubic root of the database size. Thus, the proposed modification of the PNN makes it possible to significantly decrease runtime and memory complexities without loosing its main advantages, namely, extremely fast training procedure and the convergence to the optimal Bayesian decision. An experimental study in visual object category classification and unconstrained face recognition with contemporary deep neural networks have shown, that our approach obtains very efficient and rather accurate decisions for the small training sample in comparison with the well-known classifiers.", "title": "" }, { "docid": "343ed18e56e6f562fa509710e4cf8dc6", "text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.", "title": "" }, { "docid": "5091c24d3ed56e45246fb444a66c290e", "text": "This paper defines presence in terms of frames and involvement [1]. 
The value of this analysis of presence is demonstrated by applying it to several issues that have been raised about presence: residual awareness of nonmediation, imaginary presence, presence as categorical or continuum, and presence breaks. The paper goes on to explore the relationship between presence and reality. Goffman introduced frames to try to answer the question, “Under what circumstances do we think things real?” Under frame analysis there are three different conditions under which things are considered unreal, these are explained and related to the experience of presence. Frame analysis is used to show why virtual environments are not usually considered to be part of reality, although the virtual spaces of phone interaction are considered real. The analysis also yields practical suggestions for extending presence within virtual environments. Keywords--presence, frames, virtual environments, mobile phones, Goffman.", "title": "" }, { "docid": "aad7697ce9d9af2b49cd3a46e441ef8e", "text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator’s range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.", "title": "" }, { "docid": "49d164ec845f6201f56e18a575ed9436", "text": "This research explores a Natural Language Processing technique utilized for the automatic reduction of melodies: the Probabilistic Context-Free Grammar (PCFG). Automatic melodic reduction was previously explored by means of a probabilistic grammar [11] [1]. However, each of these methods used unsupervised learning to estimate the probabilities for the grammar rules, and thus a corpusbased evaluation was not performed. A dataset of analyses using the Generative Theory of Tonal Music (GTTM) exists [13], which contains 300 Western tonal melodies and their corresponding melodic reductions in tree format. In this work, supervised learning is used to train a PCFG for the task of melodic reduction, using the tree analyses provided by the GTTM dataset. The resulting model is evaluated on its ability to create accurate reduction trees, based on a node-by-node comparison with ground-truth trees. Multiple data representations are explored, and example output reductions are shown. 
Motivations for performing melodic reduction include melodic identification and similarity, efficient storage of melodies, automatic composition, variation matching, and automatic harmonic analysis.", "title": "" }, { "docid": "41e9dac7301e00793c6e4891e07b53fa", "text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.", "title": "" }, { "docid": "67d317befd382c34c143ebfe806a3b55", "text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.", "title": "" }, { "docid": "08134d0d76acf866a71d660062f2aeb8", "text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.", "title": "" }, { "docid": "ed3b8bfdd6048e4a07ee988f1e35fd21", "text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. 
To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean  ±  std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.", "title": "" }, { "docid": "23832f031f7c700f741843e54ff81b4e", "text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.", "title": "" } ]
scidocsrr
64d42a604baece201ba258cf06ac275b
DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
[ { "docid": "4337f8c11a71533d38897095e5e6847a", "text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-­‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... 
the stop light light\t\r \t\r ... What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.", "title": "" } ]
[ { "docid": "236896835b48994d7737b9152c0e435f", "text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "title": "" }, { "docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75", "text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.", "title": "" }, { "docid": "a5a1dd08d612db28770175cc578dd946", "text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.", "title": "" }, { "docid": "56287b9aea445b570aa7fe77f1b7751a", "text": "Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. 
Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.", "title": "" }, { "docid": "dd4cc15729f65a0102028949b34cc56f", "text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated in both analytically and experimental way, for the appraised malicious attack scenarios and for different communication topology structures. The effectiveness of the proposed strategy is shown by using PLEXE, a state of the art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.", "title": "" }, { "docid": "c6bd4cd6f90abf20f2619b1d1af33680", "text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. 
Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.", "title": "" }, { "docid": "384dfe9f80cd50ce3a41cd0fdc494e43", "text": "Optical Character Recognition (OCR) systems often generate errors for images with noise or with low scanning resolution. In this paper, a novel approach that can be used to improve and restore the quality of any clean lower resolution images for easy recognition by OCR process. The method relies on the production of four copies of the original image so that each picture undergoes different restoration processes. These four copies of the images are then passed to a single OCR engine in parallel. In addition to that, the method does not need any traditional alignment between the four resulting texts, which is time consuming and needs complex calculation. It implements a new procedure to choose the best among them and can be applied without prior training on errors. The experimental results show improvement in word error rate for low resolution images by more than 67%.", "title": "" }, { "docid": "29822df06340218a43fbcf046cbeb264", "text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.", "title": "" }, { "docid": "f095118c63d1531ebdbaec3565b0d91f", "text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. 
Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.", "title": "" }, { "docid": "47ae3428ecddd561b678e5715dfd59ab", "text": "Social media have become an established feature of the dynamic information space that emerges during crisis events. Both emergency responders and the public use these platforms to search for, disseminate, challenge, and make sense of information during crises. In these situations rumors also proliferate, but just how fast such information can spread is an open question. We address this gap, modeling the speed of information transmission to compare retransmission times across content and context features. We specifically contrast rumor-affirming messages with rumor-correcting messages on Twitter during a notable hostage crisis to reveal differences in transmission speed. Our work has important implications for the growing field of crisis informatics.", "title": "" }, { "docid": "e3664eb9901464d6af312e817393e712", "text": "The security of computer systems fundamentally relies on memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access. In this paper, we present Meltdown. Meltdown exploits side effects of out-of-order execution on modern processors to read arbitrary kernel-memory locations including personal data and passwords. Out-of-order execution is an indispensable performance feature and present in a wide range of modern processors. The attack is independent of the operating system, and it does not rely on any software vulnerabilities. Meltdown breaks all security guarantees provided by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation. On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer. We show that the KAISER defense mechanism for KASLR has the important (but inadvertent) side effect of impeding Meltdown. We stress that KAISER must be deployed immediately to prevent largescale exploitation of this severe information leakage.", "title": "" }, { "docid": "ae9469b80390e5e2e8062222423fc2cd", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. 
We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "55a29653163bdf9599bf595154a99a25", "text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.", "title": "" }, { "docid": "424bf67761e234f6cf85eacabf38a502", "text": "Due to poor efficiencies of Incandescent Lamps (ILs), Fluorescent Lamps (FLs) and Compact Fluorescent Lamps (CFLs) are increasingly used in residential and commercial applications. This proliferation of FLs and CFLs increases the harmonics level in distribution systems that could affect power systems and end users. In order to quantify the harmonics produced by FLs and CFLs precisely, accurate modelling of these loads are required. Matlab Simulink is used to model and simulate the full models of FLs and CFLs to give close results to the experimental measurements. Moreover, a Constant Load Power (CLP) model is also modelled and its results are compared with the full models of FLs and CFLs. This CLP model is much faster to simulate and easier to model than the full model. Such models help engineers and researchers to evaluate the harmonics exist within households and commercial buildings.", "title": "" }, { "docid": "69dc7ae1e3149d475dabb4bbf8f05172", "text": "Knowledge about entities is essential for natural language understanding. This knowledge includes several facts about entities such as their names, properties, relations and types. This data is usually stored in large scale structures called knowledge bases (KB) and therefore building and maintaining KBs is very important. Examples of such KBs are Wikipedia, Freebase and Google knowledge graph. Incompleteness is unfortunately a reality for every KB, because the world is changing – new entities are emerging, and existing entities are getting new properties. Therefore, we always need to update KBs. To do so, we propose an information extraction method that processes large raw corpora in order to gather knowledge about entities. We focus on extraction of entity types and address the task of fine-grained entity typing: given a KB and a large corpus of text with mentions of entities in the KB, find all fine-grained types of the entities. For example given a large corpus and the entity “Barack Obama” we need to find all his types including PERSON, POLITICIAN, and AUTHOR. Artificial neural networks (NNs) have shown promising results in different machine learning problems. Distributed representation (embedding) is an effective way of representing data for NNs. 
In this work, we introduce two models for fine-grained entity typing using NNs with distributed representations of language units: (i) A global model that predicts types of an entity based on its global representation learned from the entity’s name and contexts. (ii) A context model that predicts types of an entity based on its context-level predictions. Each of the two proposed models has some specific properties. For the global model, learning high quality entity representations is crucial because it is the only source used for the predictions. Therefore, we introduce representations using name and contexts of entities on three levels of entity, word, and character. We show each has complementary information and a multi-level representation is the best. For the context model, we need to use distant supervision since the contextlevel labels are not available for entities. Distant supervised labels are noisy and this harms the performance of models. Therefore, we introduce and apply new algorithms for noise mitigation using multi-instance learning.", "title": "" }, { "docid": "afefd32f480dbb5880eea1d9e489147e", "text": "Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.", "title": "" }, { "docid": "5fbedf5f399ee19d083a73f962cd9f29", "text": "A 70 mm-open-ended coaxial line probe was developed to perform measurements of the dielectric properties of large concrete samples. The complex permittivity was measured in the frequency range 50 MHz – 1.5 GHz during the hardening process of the concrete. As expected, strong dependence of water content was observed.", "title": "" }, { "docid": "1a77d9ee6da4620b38efec315c6357a1", "text": "The authors present a new approach to culture and cognition, which focuses on the dynamics through which specific pieces of cultural knowledge (implicit theories) become operative in guiding the construction of meaning from a stimulus. Whether a construct comes to the fore in a perceiver's mind depends on the extent to which the construct is highly accessible (because of recent exposure). In a series of cognitive priming experiments, the authors simulated the experience of bicultural individuals (people who have internalized two cultures) of switching between different cultural frames in response to culturally laden symbols. 
The authors discuss how this dynamic, constructivist approach illuminates (a) when cultural constructs are potent drivers of behavior and (b) how bicultural individuals may control the cognitive effects of culture.", "title": "" }, { "docid": "cba3209a27e1332f25f29e8b2c323d37", "text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.", "title": "" } ]
scidocsrr
a147482deaac13986d5360193423da26
Sex differences in response to visual sexual stimuli: a review.
[ { "docid": "9b8d4b855bab5e2fdcadd1fe1632f197", "text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.", "title": "" }, { "docid": "a6a7770857964e96f98bd4021d38f59f", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" } ]
[ { "docid": "1c832140fce684c68fd91779d62596e3", "text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.", "title": "" }, { "docid": "e92299720be4d028b4a7d726c99bc216", "text": "Nowadays terahertz spectroscopy is a well-established technique and recent progresses in technology demonstrated that this new technique is useful for both fundamental research and industrial applications. Varieties of applications such as imaging, non destructive testing, quality control are about to be transferred to industry supported by permanent improvements from basic research. Since chemometrics is today routinely applied to IR spectroscopy, we discuss in this paper the advantages of using chemometrics in the framework of terahertz spectroscopy. Different analytical procedures are illustrates. We conclude that advanced data processing is the key point to validate routine terahertz spectroscopy as a new reliable analytical technique.", "title": "" }, { "docid": "b27ab468a885a3d52ec2081be06db2ef", "text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.", "title": "" }, { "docid": "1abcede6d3044e5550df404cfb7c87a4", "text": "There is intense interest in graphene in fields such as physics, chemistry, and materials science, among others. Interest in graphene's exceptional physical properties, chemical tunability, and potential for applications has generated thousands of publications and an accelerating pace of research, making review of such research timely. 
Here is an overview of the synthesis, properties, and applications of graphene and related materials (primarily, graphite oxide and its colloidal suspensions and materials made from them), from a materials science perspective.", "title": "" }, { "docid": "ee25ce281929eb63ce5027060be799c9", "text": "Finally, I cannot end these few lines without thanking my family, who behind the scenes have for so long given me all the support I need; my thoughts go especially to Magaly. A bit of history… From the 1960s onward, computerized data in organizations took on an ever-growing importance. The computer systems managing these data are used essentially to facilitate the day-to-day activity of organizations and to support decision making. The democratization of microcomputing in the 1980s enabled a major development of these systems, considerably increasing the quantities of computerized data available. Faced with the numerous and rapid changes imposed on organizations, decision making became, from the 1990s on, a primordial activity requiring the deployment of effective dedicated systems [Inmon, 1994]. From those years on, software vendors offered tools to facilitate data analysis in support of decision making. Spreadsheets were probably the first tools used to analyze data for decision-making purposes. They were complemented by tools that eased decision makers' access to data through graphical interfaces dedicated to querying; the Business Objects software, still one of the best known today, is a prominent example. The development of systems dedicated to decision making gave rise to E.T.L. ("Extract-Transform-Load") tools intended to facilitate the extraction and transformation of decision-support data. From the late 1990s, major players such as Microsoft, Oracle, IBM, and SAP entered this new market, evolving their own tools and acquiring numerous specialized software products; for example, SAP has just acquired Business Objects for 4.8 billion euros. They now offer complete product suites covering the entire decision-support chain: E.T.L., storage (DBMS), reporting, and analysis. The last decade has also seen a notable evolution with the emergence of an offering from the free software ("open source") world that has now reached a certain maturity (Talend, JPalo, Jasper). Dominated by market tools, business intelligence has, since the mid-1990s, been a field taken up by the research community through the concepts of the data warehouse [Widom, 1995] [Chaudhury, et al., 1997] and OLAP ("On-Line Analytical Processing") [Codd, et al., 1993]. First disseminated in …", "title": "" }, { "docid": "bc3aaa01d7e817a97c6709d193a74f9e", "text": "Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them. It exploits the inherent recursivity of dictionaries by encouraging consistency between the representations it uses as inputs and the representations it produces as outputs. 
The resulting embeddings are shown to capture semantic similarity better than regular distributional methods and other dictionary-based methods. In addition, the method shows strong performance when trained exclusively on dictionary data and generalizes in one shot.", "title": "" }, { "docid": "e34d244a395a753b0cb97f8535b56add", "text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "title": "" }, { "docid": "9276dc2798f90483025ea01e69c51001", "text": "Convolutional Neural Network (CNN) was firstly introduced in Computer Vision for image recognition by LeCun et al. in 1989. Since then, it has been widely used in image recognition and classification tasks. The recent impressive success of Krizhevsky et al. in ILSVRC 2012 competition demonstrates the significant advance of modern deep CNN on image classification task. Inspired by his work, many recent research works have been concentrating on understanding CNN and extending its application to more conventional computer vision tasks. Their successes and lessons have promoted the development of both CNN and vision science. This article makes a survey of recent progress in CNN since 2012. We will introduce the general architecture of a modern CNN and make insights into several typical CNN incarnations which have been studied extensively. We will also review the efforts to understand CNNs and review important applications of CNNs in computer vision tasks.", "title": "" }, { "docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3", "text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. 
The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.", "title": "" }, { "docid": "361e874cccb263b202155ef92e502af3", "text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.", "title": "" }, { "docid": "edd39b11eaed2dc89ab74542ce9660bb", "text": "The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying data in these scenarios, where data is rich in relationships. In this paper, it is analyzed the fundamental points of graph databases, showing their main characteristics and advantages. We study Neo4j, the top graph database software in the market and evaluate its performance using the Social Network Benchmark (SNB).", "title": "" }, { "docid": "e74240aef79f42ac0345a2ae49ecde4a", "text": "Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind’s WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.", "title": "" }, { "docid": "77f5216ede8babf4fb3b2bcbfc9a3152", "text": "Various aspects of the theory of random walks on graphs are surveyed. 
In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.", "title": "" }, { "docid": "d0ebee0648beecbd00faaf67f76f256c", "text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. 
These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and", "title": "" }, { "docid": "86ba97e91a8c2bcb1015c25df7c782db", "text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. 
Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.", "title": "" }, { "docid": "be6e3666eba5752a59605a86e5bd932f", "text": "Accurate knowledge on the absolute or true speed of a vehicle, if and when available, can be used to enhance advanced vehicle dynamics control systems such as anti-lock brake systems (ABS) and auto-traction systems (ATS) control schemes. Current conventional method uses wheel speed measurements to estimate the speed of the vehicle. As a result, indication of the vehicle speed becomes erroneous and, thus, unreliable when large slips occur between the wheels and terrain. This paper describes a fuzzy rule-based Kalman filtering technique which employs an additional accelerometer to complement the wheel-based speed sensor, and produce an accurate estimation of the true speed of a vehicle. We use the Kalman filters to deal with the noise and uncertainties in the speed and acceleration models, and fuzzy logic to tune the covariances and reset the initialization of the filter according to slip conditions detected and measurement-estimation condition. Experiments were conducted using an actual vehicle to verify the proposed strategy. Application of the fuzzy logic rule-based Kalman filter shows that accurate estimates of the absolute speed can be achieved even under significant braking skid and traction slip conditions.", "title": "" }, { "docid": "335e92a896c6cce646f3ae81c5d9a02c", "text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.", "title": "" }, { "docid": "c1e3872540aa37d6253b3e52bb3551a9", "text": "Human Activity recognition (HAR) is an important area of research in ubiquitous computing and Human Computer Interaction. 
To recognize activities using mobile or wearable sensor, data are collected using appropriate sensors, segmented, needed features extracted and activities categories using discriminative models (SVM, HMM, MLP etc.). Feature extraction is an important stage as it helps to reduce computation time and ensure enhanced recognition accuracy. Earlier researches have used statistical features which require domain expert and handcrafted features. However, the advent of deep learning that extracts salient features from raw sensor data and has provided high performance in computer vision, speech and image recognition. Based on the recent advances recorded in deep learning for human activity recognition, we briefly reviewed the different deep learning methods for human activities implemented recently and then propose a conceptual deep learning frameworks that can be used to extract global features that model the temporal dependencies using Gated Recurrent Units. The framework when implemented would comprise of seven convolutional layer, two Gated recurrent unit and Support Vector Machine (SVM) layer for classification of activity details. The proposed technique is still under development and will be evaluated with benchmarked datasets and compared with other baseline deep learning algorithms.", "title": "" }, { "docid": "46e63e9f9dc006ad46e514adc26c12bd", "text": "In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently, available tools ignore either the label relationship or the view complementarily. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC' 07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.", "title": "" } ]
scidocsrr
2dfb5e06121079ab4320b8a496e4d45e
Dialog state tracking, a machine reading approach using Memory Network
[ { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. 
This task is still beyond the capability of today’s computers and algorithms.", "title": "" }, { "docid": "3d22f5be70237ae0ee1a0a1b52330bfa", "text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.", "title": "" } ]
[ { "docid": "b2418dc7ae9659d643a74ba5c0be2853", "text": "MITJA D. BACK*, LARS PENKE, STEFAN C. SCHMUKLE, KAROLINE SACHSE, PETER BORKENAU and JENS B. ASENDORPF Department of Psychology, Johannes Gutenberg-University Mainz, Germany Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK Department of Psychology, Westfälische Wilhelms-University Münster, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Humboldt University Berlin, Germany", "title": "" }, { "docid": "05a77d687230dc28697ca1751586f660", "text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.", "title": "" }, { "docid": "854b2bfdef719879a437f2d87519d8e8", "text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. 
It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice. Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.", "title": "" }, { "docid": "2493570aa0a224722a07e81c9aab55cd", "text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.", "title": "" }, { "docid": "1dbed92fbed2293b1da3080c306709e2", "text": "The large number of peer-to-peer file-sharing applications can be subdivided in three basic categories: having a mediated, pure or hybrid architecture. This paper details each of these and indicates their respective strengths and weaknesses. In addition to this theoretical study, a number of practical experiments were conducted, with special attention for three popular applications, representative of each of the three architectures. Although a number of measurement studies have been done in the past ([1], [3], etc.) these all investigate only a fraction of the available applications and architectures, with very little focus on the bigger picture and to the recent evolutions in peer-to-peer architectures.", "title": "" }, { "docid": "e9059b28b268c4dfc091e63c7419f95b", "text": "Internet advertising is one of the most popular online business models. JavaScript-based advertisements (ads) are often directly embedded in a web publisher's page to display ads relevant to users (e.g., by checking the user's browser environment and page content). However, as third-party code, the ads pose a significant threat to user privacy. Worse, malicious ads can exploit browser vulnerabilities to compromise users' machines and install malware. To protect users from these threats, we propose AdSentry, a comprehensive confinement solution for JavaScript-based advertisements. The crux of our approach is to use a shadow JavaScript engine to sandbox untrusted ads. In addition, AdSentry enables flexible regulation on ad script behaviors by completely mediating its access to the web page (including its DOM) without limiting the JavaScript functionality exposed to the ads. 
Our solution allows both web publishers and end users to specify access control policies to confine ads' behaviors. We have implemented a proof-of-concept prototype of AdSentry that transparently supports the Mozilla Firefox browser. Our experiments with a number of ads-related attacks successfully demonstrate its practicality and effectiveness. The performance measurement indicates that our system incurs a small performance overhead.", "title": "" }, { "docid": "c4421784554095ffed1365b3ba41bdc0", "text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences", "title": "" }, { "docid": "9a1505d126d1120ffa8d9670c71cb076", "text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.", "title": "" }, { "docid": "4bec71105c8dca3d0b48e99cdd4e809a", "text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. 
In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.", "title": "" }, { "docid": "73be9211451c3051d817112dc94df86c", "text": "Shear wave elasticity imaging (SWEI) is a new approach to imaging and characterizing tissue structures based on the use of shear acoustic waves remotely induced by the radiation force of a focused ultrasonic beam. SWEI provides the physician with a virtual \"finger\" to probe the elasticity of the internal regions of the body. In SWEI, compared to other approaches in elasticity imaging, the induced strain in the tissue can be highly localized, because the remotely induced shear waves are attenuated fully within a very limited area of tissue in the vicinity of the focal point of a focused ultrasound beam. SWEI may add a new quality to conventional ultrasonic imaging or magnetic resonance imaging. Adding shear elasticity data (\"palpation information\") by superimposing color-coded elasticity data over ultrasonic or magnetic resonance images may enable better differentiation of tissues and further enhance diagnosis. This article presents a physical and mathematical basis of SWEI with some experimental results of pilot studies proving feasibility of this new ultrasonic technology. A theoretical model of shear oscillations in soft biological tissue remotely induced by the radiation force of focused ultrasound is described. Experimental studies based on optical and magnetic resonance imaging detection of these shear waves are presented. Recorded spatial and temporal profiles of propagating shear waves fully confirm the results of mathematical modeling. Finally, the safety of the SWEI method is discussed, and it is shown that typical ultrasonic exposure of SWEI is significantly below the threshold of damaging effects of focused ultrasound.", "title": "" }, { "docid": "53ac28d19b9f3c8f68e12016e9cfabbc", "text": "Despite surveillance systems becoming increasingly ubiquitous in our living environment, automated surveillance, currently based on video sensory modality and machine intelligence, lacks most of the time the robustness and reliability required in several real applications. To tackle this issue, audio sensory devices have been incorporated, both alone or in combination with video, giving birth in the past decade, to a considerable amount of research. 
In this article, audio-based automated surveillance methods are organized into a comprehensive survey: A general taxonomy, inspired by the more widespread video surveillance field, is proposed to systematically describe the methods covering background subtraction, event classification, object tracking, and situation analysis. For each of these tasks, all the significant works are reviewed, detailing their pros and cons and the context for which they have been proposed. Moreover, a specific section is devoted to audio features, discussing their expressiveness and their employment in the above-described tasks. Differing from other surveys on audio processing and analysis, the present one is specifically targeted to automated surveillance, highlighting the target applications of each described method and providing the reader with a systematic and schematic view useful for retrieving the most suited algorithms for each specific requirement.", "title": "" }, { "docid": "4457c0b480ec9f3d503aa89c6bbf03b9", "text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.", "title": "" }, { "docid": "2f8a6dcaeea91ef5034908b5bab6d8d3", "text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). 
Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.", "title": "" }, { "docid": "e777794833a060f99e11675952cd3342", "text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.", "title": "" }, { "docid": "f1c1a0baa9f96d841d23e76b2b00a68d", "text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08", "title": "" }, { "docid": "823680e9de11eb0fe987d2b6827f4665", "text": "Narcissism has been a perennial topic for psychoanalytic papers since Freud's 'On narcissism: An introduction' (1914). The understanding of this field has recently been greatly furthered by the analytical writings of Kernberg and Kohut despite, or perhaps because of, their glaring disagreements. Despite such theoretical advances, clinical theory has far outpaced clinical practice. This paper provides a clarification of the characteristics, diagnosis and development of the narcissistic personality disorder and draws out the differing treatment implications, at various levels of psychological intensity, of the two theories discussed.", "title": "" }, { "docid": "abb06d560266ca1695f72e4d908cf6ea", "text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.", "title": "" }, { "docid": "a3cb3e28db4e44642ecdac8eb4ae9a8a", "text": "A Ka-band highly linear power amplifier (PA) is implemented in 28-nm bulk CMOS technology. Using a deep class-AB PA topology with appropriate harmonic control circuit, highly linear and efficient PAs are designed at millimeter-wave band. This PA architecture provides a linear PA operation close to the saturated power. Also elaborated harmonic tuning and neutralization techniques are used to further improve the transistor gain and stability. A two-stack PA is designed for higher gain and output power than a common source (CS) PA. 
Additionally, average power tracking (APT) is applied to further reduce the power consumption at a low power operation and, hence, extend battery life. Both the PAs are tested with two different signals at 28.5 GHz; they are fully loaded long-term evolution (LTE) signal with 16-quadrature amplitude modulation (QAM), a 7.5-dB peak-to-average power ratio (PAPR), and a 20-MHz bandwidth (BW), and a wireless LAN (WLAN) signal with 64-QAM, a 10.8-dB PAPR, and an 80-MHz BW. The CS/two-stack PAs achieve power-added efficiency (PAE) of 27%/25%, error vector magnitude (EVM) of 5.17%/3.19%, and adjacent channel leakage ratio (ACLR E-UTRA) of -33/-33 dBc, respectively, with an average output power of 11/14.6 dBm for the LTE signal. For the WLAN signal, the CS/2-stack PAs achieve the PAE of 16.5%/17.3%, and an EVM of 4.27%/4.21%, respectively, at an average output power of 6.8/11 dBm.", "title": "" } ]
scidocsrr
ce005239bc1f2180ad8508470e4a168d
Agent-based decision-making process in airport ground handling management
[ { "docid": "b20aa2222759644b4b60b5b450424c9e", "text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "36b609f1c748154f0f6193c6578acec9", "text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "720a3d65af4905cbffe74ab21d21dd3f", "text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.", "title": "" }, { "docid": "6f1e71399e5786eb9c3923a1e967cd8f", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "39d15901cd5fbd1629d64a165a94c5f5", "text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.", "title": "" }, { "docid": "01e064e0f2267de5a26765f945114a6e", "text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. 
In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "title": "" }, { "docid": "4d445832d38c288b1b59a3df7b38eb1b", "text": "UNLABELLED\nThe aim of this prospective study was to assess the predictive value of (18)F-FDG PET/CT imaging for pathologic response to neoadjuvant chemotherapy (NACT) and outcome in inflammatory breast cancer (IBC) patients.\n\n\nMETHODS\nTwenty-three consecutive patients (51 y ± 12.7) with newly diagnosed IBC, assessed by PET/CT at baseline (PET1), after the third course of NACT (PET2), and before surgery (PET3), were included. The patients were divided into 2 groups according to pathologic response as assessed by the Sataloff classification: pathologic complete response for complete responders (stage TA and NA or NB) and non-pathologic complete response for noncomplete responders (not stage A for tumor or not stage NA or NB for lymph nodes). In addition to maximum standardized uptake value (SUVmax) measurements, a global breast metabolic tumor volume (MTV) was delineated using a semiautomatic segmentation method. Changes in SUVmax and MTV between PET1 and PET2 (ΔSUV1-2; ΔMTV1-2) and PET1 and PET3 (ΔSUV1-3; ΔMTV1-3) were measured.\n\n\nRESULTS\nMean SUVmax on PET1, PET2, and PET3 did not statistically differ between the 2 pathologic response groups. On receiver-operating-characteristic analysis, a 72% cutoff for ΔSUV1-3 provided the best performance to predict residual disease, with sensitivity, specificity, and accuracy of 61%, 80%, and 65%, respectively. On univariate analysis, the 72% cutoff for ΔSUV1-3 was the best predictor of distant metastasis-free survival (P = 0.05). On multivariate analysis, the 72% cutoff for ΔSUV1-3 was an independent predictor of distant metastasis-free survival (P = 0.01).\n\n\nCONCLUSION\nOur results emphasize the good predictive value of change in SUVmax between baseline and before surgery to assess pathologic response and survival in IBC patients undergoing NACT.", "title": "" }, { "docid": "53a67740e444b5951bc6ab257236996e", "text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). 
Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.", "title": "" }, { "docid": "c7160e93c9cce017adc1200dc7d597f2", "text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.", "title": "" }, { "docid": "f2c203e9364fee062747468dc7995429", "text": "Microinverters are module-level power electronic (MLPE) systems that are expected to have a service life more than 25 years. The general practice for providing assurance in long-term reliability under humid climatic conditions is to subject the microinverters to ‘damp heat test’ at 85°C/85%RH for 1000hrs as recommended in lEC 61215 standard. However, there is limited understanding on the correlation between the said ‘damp heat’ test and field conditions for microinverters. In this paper, a physics-of-failure (PoF)-based approach is used to correlate damp heat test to field conditions. Results of the PoF approach indicates that even 3000hrs at 85°C/85%RH may not be sufficient to guarantee 25-years' service life in certain places in the world. Furthermore, we also demonstrate that use of Miami, FL weathering data as benchmark for defining damp heat test durations will not be sufficient to guarantee 25 years' service life. Finally, when tests were conducted at 85°C/85%RH for more than 3000hrs, it was found that the PV connectors are likely to fail before the actual power electronics could fail.", "title": "" }, { "docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. 
Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "f44b5199f93d4b441c125ac55e4e0497", "text": "A modified method for better superpixel generation based on simple linear iterative clustering (SLIC) is presented and named BSLIC in this paper. By initializing cluster centers in hexagon distribution and performing k-means clustering in a limited region, the generated superpixels are shaped into regular and compact hexagons. The additional cluster centers are initialized as edge pixels to improve boundary adherence, which is further promoted by incorporating the boundary term into the distance calculation of the k-means clustering. Berkeley Segmentation Dataset BSDS500 is used to qualitatively and quantitatively evaluate the proposed BSLIC method. Experimental results show that BSLIC achieves an excellent compromise between boundary adherence and regularity of size and shape. In comparison with SLIC, the boundary adherence of BSLIC is increased by at most 12.43% for boundary recall and 3.51% for under segmentation error.", "title": "" }, { "docid": "54bee01d53b8bcb6ca067493993b4ff3", "text": "Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL) to model delayed reward with a log-linear function approximation of residual future score improvement. Our method provides dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.", "title": "" }, { "docid": "6f1fc6a07d0beb235f5279e17a46447f", "text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. 
The results show that the proposed method results in lesser time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.", "title": "" }, { "docid": "fad164e21c7ec013450a8b96d75d9457", "text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. Little work has been done on the Pinterest network before due to lack of availability of data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.", "title": "" }, { "docid": "05477664471a71eebc26d59aed9b0350", "text": "This article serves as a quick reference for respiratory alkalosis. Guidelines for analysis and causes, signs, and a stepwise approach are presented.", "title": "" }, { "docid": "9078698db240725e1eb9d1f088fb05f4", "text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. 
The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain a sharper filtering results in the edge regions and more smooth results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "4630ade03760cb8ec1da11b16703b3f1", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "6cb480efca7138e26ce484eb28f0caec", "text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. 
However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm.", "title": "" } ]
scidocsrr
14f8b7ea36774ef990759bc644743083
Neural Variational Inference for Text Processing
[ { "docid": "55b9284f9997b18d3b1fad9952cd4caa", "text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.", "title": "" }, { "docid": "8b6832586f5ec4706e7ace59101ea487", "text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.", "title": "" }, { "docid": "120e36cc162f4ce602da810c80c18c7d", "text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.", "title": "" } ]
[ { "docid": "c73af0945ac35847c7a86a7f212b4d90", "text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.", "title": "" }, { "docid": "d35d96730b71db044fccf4c8467ff081", "text": "Image steganalysis is to discriminate innocent images and those suspected images with hidden messages. This task is very challenging for modern adaptive steganography, since modifications due to message hiding are extremely small. Recent studies show that Convolutional Neural Networks (CNN) have demonstrated superior performances than traditional steganalytic methods. Following this idea, we propose a novel CNN model for image steganalysis based on residual learning. The proposed Deep Residual learning based Network (DRN) shows two attractive properties than existing CNN based methods. First, the model usually contains a large number of network layers, which proves to be effective to capture the complex statistics of digital images. Second, the residual learning in DRN preserves the stego signal coming from secret messages, which is extremely beneficial for the discrimination of cover images and stego images. Comprehensive experiments on standard dataset show that the DRN model can detect the state of arts steganographic algorithms at a high accuracy. It also outperforms the classical rich model method and several recently proposed CNN based methods.", "title": "" }, { "docid": "738a69ad1006c94a257a25c1210f6542", "text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.", "title": "" }, { "docid": "5f1c1baca54a8af4bf9c96818ad5b688", "text": "The spatial organisation of museums and its influence on the visitor experience has been the subject of numerous studies. 
Previous research, despite reporting some actual behavioural correlates, rarely had the possibility to investigate the cognitive processes of the art viewers. In the museum context, where spatial layout is one of the most powerful curatorial tools available, attention and memory can be measured as a means of establishing whether or not the gallery fulfils its function as a space for contemplating art. In this exploratory experiment, 32 participants split into two groups explored an experimental, non-public exhibition and completed two unanticipated memory tests afterwards. The results show that some spatial characteristics of an exhibition can inhibit the recall of pictures and shift the focus to perceptual salience of the artworks.", "title": "" }, { "docid": "62da9a85945652f195086be0ef780827", "text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. In addition, the database is expected to be released publicly in late 2014.", "title": "" }, { "docid": "49e616b9db5ba5003ae01abfb6ed3e16", "text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. 
Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.", "title": "" }, { "docid": "865306ad6f5288cf62a4082769e8068a", "text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.", "title": "" }, { "docid": "485f7998056ef7a30551861fad33bef4", "text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.", "title": "" }, { "docid": "e104544e8ac61ea6d77415df1deeaf81", "text": "This thesis is devoted to marker-less 3D human motion tracking in calibrated and synchronized multicamera systems. Pose estimation is based on a 3D model, which is transformed into the image plane and then rendered. Owing to elaborated techniques the tracking of the full body has been achieved in real-time via dynamic optimization or dynamic Bayesian filtering. The objective function of a particle swarm optimization algorithm and the observation model of a particle filter are based on matching between the rendered 3D models in the required poses and image features representing the extracted person. In such an approach the main part of the computational overload is associated with the rendering of 3D models in hypothetical poses as well as determination of value of objective function. Effective methods for rendering of 3D models in real-time with support of OpenGL as well as parallel methods for determining the objective function on the GPU were developed. 
The elaborated solutions permit 3D tracking of full body motion in real-time.", "title": "" }, { "docid": "ea8b083238554866d36ac41b9c52d517", "text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .", "title": "" }, { "docid": "d44080fc547355ff8389f9da53d03c45", "text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.", "title": "" }, { "docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7", "text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].", "title": "" }, { "docid": "dbfbdd4866d7fd5e34620c82b8124c3a", "text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. 
He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. 
“Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. 
(11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ", "title": "" }, { "docid": "5e60c55f419c7d62f4eeb9165e7f107c", "text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. 
Many users varied the roles, effort estimations and quality assurance. Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.", "title": "" }, { "docid": "72e9e772ede3d757122997d525d0f79c", "text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.", "title": "" }, { "docid": "021bf98bc0ff722d9a44c5ef5e73f3c8", "text": "BACKGROUND\nMalignant bowel obstruction is a highly symptomatic, often recurrent, and sometimes refractory condition in patients with intra-abdominal tumor burden. Gastro-intestinal symptoms and function may improve with anti-inflammatory, anti-secretory, and prokinetic/anti-nausea combination medical therapy.\n\n\nOBJECTIVE\nTo describe the effect of octreotide, metoclopramide, and dexamethasone in combination on symptom burden and bowel function in patients with malignant bowel obstruction and dysfunction.\n\n\nDESIGN\nA retrospective case series of patients with malignant bowel obstruction (MBO) and malignant bowel dysfunction (MBD) treated by a palliative care consultation service with octreotide, metoclopramide, and dexamethasone. Outcomes measures were nausea, pain, and time to resumption of oral intake.\n\n\nRESULTS\n12 cases with MBO, 11 had moderate/severe nausea on presentation. 100% of these had improvement in nausea by treatment day #1. 100% of patients with moderate/severe pain improved to tolerable level by treatment day #1. The median time to resumption of oral intake was 2 days (range 1-6 days) in the 8 cases with evaluable data. Of 7 cases with MBD, 6 had For patients with malignant bowel dysfunction, of those with moderate/severe nausea. 5 of 6 had subjective improvement by day#1. Moderate/severe pain improved to tolerable levels in 5/6 by day #1. Of the 4 cases with evaluable data on resumption of PO intake, time to resume PO ranged from 1-4 days.\n\n\nCONCLUSION\nCombination medical therapy may provide rapid improvement in symptoms associated with malignant bowel obstruction and dysfunction.", "title": "" }, { "docid": "ebea79abc60a5d55d0397d21f54cc85e", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. 
However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "72b4ca7dbbd4cc0cca0779b1e9fdafe4", "text": "As population structure can result in spurious associations, it has constrained the use of association studies in human and plant genetics. Association mapping, however, holds great promise if true signals of functional association can be separated from the vast number of false signals generated by population structure. We have developed a unified mixed-model approach to account for multiple levels of relatedness simultaneously as detected by random genetic markers. We applied this new approach to two samples: a family-based sample of 14 human families, for quantitative gene expression dissection, and a sample of 277 diverse maize inbred lines with complex familial relationships and population structure, for quantitative trait dissection. Our method demonstrates improved control of both type I and type II error rates over other methods. As this new method crosses the boundary between family-based and structured association samples, it provides a powerful complement to currently available methods for association mapping.", "title": "" }, { "docid": "b466803c9a9be5d38171ece8d207365e", "text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. 
Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.", "title": "" }, { "docid": "cc7179392883ad12d42469fc7e1b3e01", "text": "A low-profile broadband dual-polarized patch subarray is designed in this letter for a highly integrated X-band synthetic aperture radar payload on a small satellite. The proposed subarray is lightweight and has a low profile due to its tile structure realized by a multilayer printed circuit board process. The measured results confirm that the subarray yields 14-dB bandwidths from 9.15 to 10.3 GHz for H-pol and from 9.35 to 10.2 GHz for V-pol. The isolation remains better than 40 dB. The average realized gains are approximately 13 dBi for both polarizations. The sidelobe levels are 25 dB for H-pol and 20 dB for V-pol. The relative cross-polarization levels are −30 dB within the half-power beamwidth range.", "title": "" } ]
scidocsrr
93f55dd33860b0763d6a60c00ecb3596
Socially Aware Networking: A Survey
[ { "docid": "7fc6e701aacc7d014916b9b47b01be16", "text": "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.", "title": "" } ]
[ { "docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d", "text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.", "title": "" }, { "docid": "84a2d26a0987a79baf597508543f39b6", "text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.", "title": "" }, { "docid": "a4c76e58074a42133a59a31d9022450d", "text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. 
We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.", "title": "" }, { "docid": "2bf0219394d87654d2824c805844fcaa", "text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 kevin@wchiang.net • chhajed@uiuc.edu • jhess@uiuc.edu", "title": "" }, { "docid": "91e8516d2e7e1e9de918251ac694ee08", "text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.", "title": "" }, { "docid": "a76ba02ef0f87a41cdff1a4046d4bba1", "text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "6f265af3f4f93fcce13563cac14b5774", "text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. 
We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.", "title": "" }, { "docid": "e1b536458ddc8603b281bac69e6bd2e8", "text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.", "title": "" }, { "docid": "288845120cdf96a20850b3806be3d89a", "text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.", "title": "" }, { "docid": "0b4c076b80d91eb20ef71e63f17e9654", "text": "Current sports injury reporting systems lack a common conceptual basis. 
We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.", "title": "" }, { "docid": "915b9627736c6ae916eafcd647cb39af", "text": "This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. As a result, the system recognizes human activities including ‘fighting’ and ‘assault’, which are high-level activities that previous systems had difficulties. 
The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.", "title": "" }, { "docid": "8b863cd49dfe5edc2d27a0e9e9db0429", "text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.", "title": "" }, { "docid": "d6a6cadd782762e4591447b7dd2c870a", "text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.", "title": "" }, { "docid": "bed89842ee325f9dc662d63c07f34726", "text": "Analysis of flows such as human movement can help spatial planners better understand territorial patterns in urban environments. In this paper, we describe FlowSampler, an interactive visual interface designed for spatial planners to gather, extract and analyse human flows in geolocated social media data. Our system adopts a graph-based approach to infer movement pathways from spatial point type data and expresses the resulting information through multiple linked visualisations to support data exploration. 
We describe two use cases to demonstrate the functionality of our system and characterise how spatial planners utilise it to address analytical task.", "title": "" }, { "docid": "1ddbe5990a1fc4fe22a9788c77307a9f", "text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization ofscientific reasoningand theformalization of scientific knowledge for this purpose. An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution ofproblems in their discipline that require complex symbolic reasoning. The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts ofknowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs tobe able toperform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain ofmass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias therules unnecessarily.", "title": "" }, { "docid": "b41c0a4e2a312d74d9a244e01fc76d66", "text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date. The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.", "title": "" }, { "docid": "18ef3fbade2856543cae1fcc563c1c43", "text": "This paper induces the prominence of variegated machine learning techniques adapted so far for the identifying different network attacks and suggests a preferable Intrusion Detection System (IDS) with the available system resources while optimizing the speed and accuracy. 
With booming number of intruders and hackers in todays vast and sophisticated computerized world, it is unceasingly challenging to identify unknown attacks in promising time with no false positive and no false negative. Principal Component Analysis (PCA) curtails the amount of data to be compared by reducing their dimensions prior to classification that results in reduction of detection time. In this paper, PCA is adopted to reduce higher dimension dataset to lower dimension dataset. It is accomplished by converting network packet header fields into a vector then PCA applied over high dimensional dataset to reduce the dimension. The reduced dimension dataset is tested with Support Vector Machines (SVM), K-Nearest Neighbors (KNN), J48 Tree algorithm, Random Forest Tree classification algorithm, Adaboost algorihm, Nearest Neighbors generalized Exemplars algorithm, Navebayes probabilistic classifier and Voting Features Interval classification algorithm. Obtained results demonstrates detection accuracy, computational efficiency with minimal false alarms, less system resources utilization. Experimental results are compared with respect to detection rate and detection time and found that TREE classification algorithms achieved superior results over other algorithms. The whole experiment is conducted by using KDD99 data set.", "title": "" }, { "docid": "fac3285b06bd12db0cef95bb854d4480", "text": "The design of a novel and versatile single-port quad-band patch antenna is presented. The antenna is capable of supporting a maximum of four operational sub-bands, with the inherent capability to enhance or suppress any resonance(s) of interest. In addition, circular-polarisation is also achieved at the low frequency band, to demonstrate the polarisation agility. A prototype model of the antenna has been fabricated and its performance experimentally validated. The antenna's single layer and low-profile configuration makes it suitable for mobile user terminals and its cavity-backed feature results in low levels of coupling.", "title": "" }, { "docid": "fa34cdffb421f2c514d5bacbc6776ae9", "text": "A review on various CMOS voltage level shifters is presented in this paper. A voltage level-shifter shifts the level of input voltage to desired output voltage. Voltage Level Shifter circuits are compared with respect to output voltage level, power consumption and delay. Systems often require voltage level translation devices to allow interfacing between integrated circuit devices built from different voltage technologies. The choice of the proper voltage level translation device depends on many factors and will affect the performance and efficiency of the circuit application.", "title": "" }, { "docid": "14ca9dfee206612e36cd6c3b3e0ca61e", "text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.", "title": "" } ]
scidocsrr
bed598d119ebf08545c93e7c90802bc1
Mash: fast genome and metagenome distance estimation using MinHash
[ { "docid": "6059b4bbf5d269d0a5f1f596b48c1acb", "text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.", "title": "" }, { "docid": "faac043b0c32bad5a44d52b93e468b78", "text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.", "title": "" }, { "docid": "252f4bcaeb5612a3018578ec2008dd71", "text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .", "title": "" } ]
[ { "docid": "15de232c8daf22cf1a1592a21e1d9df3", "text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.", "title": "" }, { "docid": "51e307584d6446ba2154676d02d2cc84", "text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.", "title": "" }, { "docid": "7db6124dc1f196ec2067a2d9dc7ba028", "text": "We describe a graphical representation of probabilistic relationships-an alternative to the Bayesian network-called a dependency network. Like a Bayesian network, a dependency network has a graph and a probability component. The graph component is a (cyclic) directed graph such that a node's parents render that node independent of all other nodes in the network. The probability component consists of the probability of a node given its parents for each node (as in a Bayesian network). 
We identify several basic properties of this representation, and describe its use in collaborative filtering (the task of predicting preferences) and the visualization of predictive relationships.", "title": "" }, { "docid": "789a9d6e2a007938fa8f1715babcabd2", "text": "We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a crossplatform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the τ (tau) lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Highenergy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.", "title": "" }, { "docid": "c692dd35605c4af62429edef6b80c121", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2", "text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. 
Source-code is available from: https://github.com/lachlants/denet", "title": "" }, { "docid": "566913d3a3d2e8fe24d6f5ff78440b94", "text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.", "title": "" }, { "docid": "e2d0a4d2c2c38722d9e9493cf506fc1c", "text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.", "title": "" }, { "docid": "56a2279c9c3bcbddf03561bec2508f81", "text": "The article introduces a framework for users' design quality judgments based on Adaptive Decision Making theory. The framework describes judgment on quality attributes (usability, content/functionality, aesthetics, customisation and engagement) with dependencies on decision making arising from the user's background, task and context. The framework is tested and refined by three experimental studies. The first two assessed judgment of quality attributes of websites with similar content but radically different designs for aesthetics and engagement. Halo effects were demonstrated whereby attribution of good quality on one attribute positively influenced judgment on another, even in the face of objective evidence to the contrary (e.g., usability errors). Users' judgment was also shown to be susceptible to framing effects of the task and their background. These appear to change the importance order of the quality attributes; hence, quality assessment of a design appears to be very context dependent. The third study assessed the influence of customisation by experiments on mobile services applications, and demonstrated that evaluation of customisation depends on the users' needs and motivation. The results are discussed in the context of the literature on aesthetic judgment, user experience and trade-offs between usability and hedonic/ludic design qualities.", "title": "" }, { "docid": "8477b50ea5b4dd76f0bf7190ba05c284", "text": "It is shown how Conceptual Graphs and Formal Concept Analysis may be combined to obtain a formalization of Elementary Logic which is useful for knowledge representation and processing. For this, a translation of conceptual graphs to formal contexts and concept lattices is described through an example. Using a suitable mathematization of conceptual graphs, basics of a uniied mathematical theory for Elementary Logic are proposed.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. 
However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "f3b1e1c9effb7828a62187e9eec5fba7", "text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.", "title": "" }, { "docid": "5203f520e6992ae6eb2e8cb28f523f6a", "text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. 
The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.", "title": "" }, { "docid": "9fdd2b84fc412e03016a12d951e4be01", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "2e0585860c1fa533412ff1fea76632cb", "text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. 
We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "f10d79d1eb6d3ec994c1ec7ec3769437", "text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]", "title": "" }, { "docid": "2da44919966d841d4a1d6f3cc2a648e9", "text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. 
Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR les 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66lambdam in diameter and 0.16lambdam in height, where lambdam is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.", "title": "" }, { "docid": "228678ad5d18d21d4bc7c1819329274f", "text": "Intentional frequency perturbation by recently researched active islanding detection techniques for inverter based distributed generation (DG) define new threshold settings for the frequency relays. This innovation has enabled the modern frequency relays to operate inside the non-detection zone (NDZ) of the conventional frequency relays. However, the effect of such perturbation on the performance of the rate of change of frequency (ROCOF) relays has not been researched so far. This paper evaluates the performance of ROCOF relays under such perturbations for an inverter interfaced DG and proposes an algorithm along with the new threshold settings to enable it work under the NDZ. The proposed algorithm is able to differentiate between an islanding and a non-islanding event. The operating principle of relay is based on low frequency current injection through grid side voltage source converter (VSC) control of doubly fed induction generator (DFIG) and therefore, the relay is defined as “active ROCOF relay”. Simulations are done in MATLAB.", "title": "" }, { "docid": "4e6ca2d20e904a0eb72fcdcd1164a5e2", "text": "Fraudulent activities (e.g., suspicious credit card transaction, financial reporting fraud, and money laundering) are critical concerns to various entities including bank, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given that the massive amount of data that investigators need to sift through, massive volumes of data integrated from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.", "title": "" } ]
scidocsrr
5cf491216962a850c261749dc519155d
Deep Learning For Video Saliency Detection
[ { "docid": "b716af4916ac0e4a0bf0b040dccd352b", "text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.", "title": "" }, { "docid": "ed9d6571634f30797fb338a928cc8361", "text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).", "title": "" } ]
[ { "docid": "6b79d1db9565fc7540d66ff8bf5aae1f", "text": "Recognizing sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and real world facts. Most of the current sarcasm detection systems consider only the utterance in isolation. There are some limited attempts toward taking into account the conversational context. In this paper, we propose an interpretable end-to-end model that combines information from both the utterance and the conversational context to detect sarcasm, and demonstrate its effectiveness through empirical evaluations. We also study the behavior of the proposed model to provide explanations for the model’s decisions. Importantly, our model is capable of determining the impact of utterance and conversational context on the model’s decisions. Finally, we provide an ablation study to illustrate the impact of different components of the proposed model.", "title": "" }, { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "6669f61c302d79553a3e49a4f738c933", "text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.", "title": "" }, { "docid": "d922dbcdd2fb86e7582a4fb78990990e", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "c2453816adf52157fca295274a4d8627", "text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. 
Low-cost gas sensors are used to effectively perceive the environment by mounting them on top of mobile vehicles, for example, using a public transport network. Thus, these sensors are part of a mobile network and perform from time to time measurements in each others vicinity. In this paper, we study three calibration algorithms that exploit co-located sensor measurements to enhance sensor calibration and consequently the quality of the pollution measurements on-the-fly. Forward calibration, based on a traditional approach widely used in the literature, is used as performance benchmark for two novel algorithms: backward and instant calibration. We validate all three algorithms with real ozone pollution measurements carried out in an urban setting by comparing gas sensor output to high-quality measurements from analytical instruments. We find that both backward and instant calibration reduce the average measurement error by a factor of two compared to forward calibration. Furthermore, we unveil the arising difficulties if sensor calibration is not based on reliable reference measurements but on sensor readings of low-cost gas sensors which is inevitable in a mobile scenario with only a few reliable sensors. We propose a solution and evaluate its effect on the measurement accuracy in experiments and simulation.", "title": "" }, { "docid": "c0ee7fef7f96db6908f49170c6c75b2c", "text": "Improving Neural Networks with Dropout Nitish Srivastava Master of Science Graduate Department of Computer Science University of Toronto 2013 Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from a neural network during training. This prevents the units from co-adapting too much. Dropping units creates thinned networks during training. The number of possible thinned networks is exponential in the number of units in the network. At test time all possible thinned networks are combined using an approximate model averaging procedure. Dropout training followed by this approximate model combination significantly reduces overfitting and gives major improvements over other regularization methods. In this work, we describe models that improve the performance of neural networks using dropout, often obtaining state-of-the-art results on benchmark datasets.", "title": "" }, { "docid": "b73a9a7770a2bbd5edcc991d7b848371", "text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnet, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.", "title": "" }, { "docid": "89526592b297342697c131daba388450", "text": "Fundamental and advanced developments in neum-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuuy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. 
The f u u y models under the framework of adaptive networks is called Adaptive-Network-based Fuzzy Inference System (ANFIS), which possess certain advantages over neural networks. We introduce the design methods f o r ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.", "title": "" }, { "docid": "e36e0c8659b8bae3acf0f178fce362c3", "text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.", "title": "" }, { "docid": "35812bda0819769efb1310d1f6d5defd", "text": "Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.", "title": "" }, { "docid": "75952f3945628c15a66b7288e6c1d1a7", "text": "Most of the samples discovered are variations of known malicious programs and thus have similar structures, however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware in terms of a vector, in which each feature consists of the amount of APIs called from a Dynamic Link Library (DLL). To determine if this approach is useful to classify malware variants into the correct families, we employ Euclidean Distance and a Multilayer Perceptron with several learning algorithms. The experimental results are analyzed to determine which method works best with the approach. The experiments were conducted with a database that contains real samples of worms and trojans and show that is possible to classify malware variants using the number of functions imported per library. However, the accuracy varies depending on the method used for the classification.", "title": "" }, { "docid": "a2d7fc045b1c8706dbfe3772a8f6ef70", "text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. 
In particular, we use causal models to represent the relationship between the features X and class label Y , and consider possible situations where different modules of the causal model change with the domain. In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. We finally focus on the case where Y is the cause for X with changing PY and PX|Y , that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y ) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution various due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. many applications in varies areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x,y) = (x k , y (i) k ) mi k=1, where i = 1, ..., n, and mi is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (xk) m k=1 are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. Previous work in domain adaptation has usually assumed that PX changes but PY |X remain the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). 
In practice it is very often that both PX and PY |X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. One possibility is to assume the change in both PX and PY |X is due to the change in PY , while PX|Y remains the same, as known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P t X|Y , identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints as to find P t X|Y , as well as P t Y |X . Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y , because they provide a compact description of the properties of the change in the data distribution.1 They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y ) change across domains, but their changes are independent from each other, as implied by the causal model Y → X . We assume that the source domains contains rich information such that for each class, P t X|Y can be approximated by a linear mixture of PX|Y on source domains. Together with other mild conditions on PX|Y , we then show that P t X|Y , as well as P t Y , is identifiable (or can be uniquely recovered). We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. 
We note that in practice, background causal knowledge is usually available, helping formulating how to transfer the knowledge from source domains to the target. Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see e.g., (Tian and Pearl 2001). 1 Possible DA Situations and Their Solutions DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; it provides the clues as to find the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1: Notation used in this paper. X , Y random variables X , Y domains", "title": "" }, { "docid": "bd590555337d3ada2c641c5f1918cf2c", "text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.", "title": "" }, { "docid": "9ff6d7a36646b2f9170bd46d14e25093", "text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. 
Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.", "title": "" }, { "docid": "d24331326c59911f9c1cdc5dd5f14845", "text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters", "title": "" }, { "docid": "30de4ba4607cfcc106361fa45b89a628", "text": "The purpose of this study is to characterize and understand the long-term behavior of the output from megavoltage radiotherapy linear accelerators. Output trends of nine beams from three linear accelerators over a period of more than three years are reported and analyzed. Output, taken during daily warm-up, forms the basis of this study. The output is measured using devices having ion chambers. These are not calibrated by accredited dosimetry laboratory, but are baseline-compared against monthly output which is measured using calibrated ion chambers. We consider the output from the daily check devices as it is, and sometimes normalized it by the actual output measured during the monthly calibration of the linacs. The data show noisy quasi-periodic behavior. The output variation, if normalized by monthly measured \"real' output, is bounded between ± 3%. Beams of different energies from the same linac are correlated with a correlation coefficient as high as 0.97, for one particular linac, and as low as 0.44 for another. These maximum and minimum correlations drop to 0.78 and 0.25 when daily output is normalized by the monthly measurements. These results suggest that the origin of these correlations is both the linacs and the daily output check devices. Beams from different linacs, independent of their energies, have lower correlation coefficient, with a maximum of about 0.50 and a minimum of almost zero. The maximum correlation drops to almost zero if the output is normalized by the monthly measured output. Some scatter plots of pairs of beam output from the same linac show band-like structures. These structures are blurred when the output is normalized by the monthly calibrated output. Fourier decomposition of the quasi-periodic output is consistent with a 1/f power law. The output variation appears to come from a distorted normal distribution with a mean of slightly greater than unity. The quasi-periodic behavior is manifested in the seasonally averaged output, showing annual variability with negative variations in the winter and positive in the summer. This trend is weakened when the daily output is normalized by the monthly calibrated output, indicating that the variation of the periodic component may be intrinsic to both the linacs and the daily measurement devices. 
Actual linac output was measured monthly. It needs to be adjusted once every three to six months for our tolerance and action levels. If these adjustments are artificially removed, then there is an increase in output of about 2%-4% per year.", "title": "" }, { "docid": "ed0444685c9a629c7d1fda7c4912fd55", "text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.", "title": "" }, { "docid": "adb2d3d17599c0aa36236f923adc934f", "text": "This paper reports, to our knowledge, the first spherical induction motor (SIM) operating with closed loop control. The motor can produce up to 4 Nm of torque along arbitrary axes with continuous speeds up to 300 rpm. The motor's rotor is a two-layer copper-over-iron spherical shell. The stator has four independent inductors that generate thrust forces on the rotor surface. The motor is also equipped with four optical mouse sensors that measure surface velocity to estimate the rotor's angular velocity, which is used for vector control of the inductors and control of angular velocity and orientation. Design considerations including torque distribution for the inductors, angular velocity sensing, angular velocity control, and orientation control are presented. Experimental results show accurate tracking of velocity and orientation commands.", "title": "" }, { "docid": "40083241b498dc6ac14de7dcc0b38399", "text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. 
Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs. In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm. We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.", "title": "" }, { "docid": "aa3e8c4e4695d8c372987c8e409eb32f", "text": "We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.", "title": "" } ]
scidocsrr
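Illustrative sketch (not part of the dataset): the anomaly-detection passage in the record above describes extracting an "activity vector" — the principal eigenvector of the service-dependency adjacency matrix — and scoring each new graph against the dominant direction of recent activity vectors. A minimal NumPy version of that idea follows; the symmetrization step, the window length, and all function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def activity_vector(adjacency: np.ndarray) -> np.ndarray:
    """Principal eigenvector of a (symmetrized) service-dependency matrix."""
    sym = adjacency + adjacency.T            # ignore edge direction for the embedding
    eigvals, eigvecs = np.linalg.eigh(sym)
    u = np.abs(eigvecs[:, np.argmax(eigvals)])   # fix the sign ambiguity
    return u / np.linalg.norm(u)

def anomaly_score(history: np.ndarray, current: np.ndarray) -> float:
    """1 - cosine similarity between the current activity vector and the
    dominant direction of recent activity vectors (columns of `history`)."""
    U, _, _ = np.linalg.svd(history, full_matrices=False)
    dominant = np.abs(U[:, 0])               # principal left singular vector
    return 1.0 - float(np.dot(dominant, current))

# Toy usage: 5 services, a sliding window of 20 past dependency graphs.
rng = np.random.default_rng(0)
window = [activity_vector(rng.random((5, 5))) for _ in range(20)]
score = anomaly_score(np.column_stack(window), activity_vector(rng.random((5, 5))))
print(f"anomaly score: {score:.4f}")
```

In the paper's setting the score would then be compared against an adaptively updated threshold; a fixed threshold would do for a first experiment.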
c0c7f6e365f2bdd184a9df5cfc5f8587
A Practical Wireless Attack on the Connected Car and Security Protocol for In-Vehicle CAN
[ { "docid": "8d041241f1a587b234c8784dea9088a4", "text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.", "title": "" } ]
[ { "docid": "ae83a2258907f00500792178dc65340d", "text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.", "title": "" }, { "docid": "ee6612fa13482f7e3bbc7241b9e22297", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "5cd68b483657180231786dc5a3407c85", "text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). 
The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leiber distance.", "title": "" }, { "docid": "d647fc2b5635a3dfcebf7843fef3434c", "text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.", "title": "" }, { "docid": "a9121a1211704006dc8de14a546e3bdc", "text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). 
The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "4d2fa4e81281f40626028192cf2f71ff", "text": "In this tutorial paper, we present a general architecture for digital clock and data recovery (CDR) for high-speed binary links. The architecture is based on replacing the analog loop filter and voltage-controlled oscillator (VCO) in a typical analog phase-locked loop (PLL)-based CDR with digital components. We provide a linearized analysis of the bang-bang phase detector and CDR loop including the effects of decimation and self-noise. Additionally, we provide measured results from an implementation of the digital CDR system which are directly comparable to the linearized analysis, plus measurements of the limit cycle behavior which arises in these loops when incoming jitter is small. Finally, the relative advantages of analog and digital implementations of the CDR for high-speed binary links is considered", "title": "" }, { "docid": "ebf92a0faf6538f1d2b85fb2aa497e80", "text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. 
Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.", "title": "" }, { "docid": "1a153e0afca80aaf35ffa1b457725fa3", "text": "Cloud computing can reduce mainframe management costs, so more and more users choose to build their own cloud hosting environment. In cloud computing, all the commands through the network connection, therefore, information security is particularly important. In this paper, we will explore the types of intrusion detection systems, and integration of these types, provided an effective and output reports, so system administrators can understand the attacks and damage quickly. With the popularity of cloud computing, intrusion detection system log files are also increasing rapidly, the effect is limited and inefficient by using the conventional analysis system. In this paper, we use Hadoop's MapReduce algorithm analysis of intrusion detection System log files, the experimental results also confirmed that the calculation speed can be increased by about 89%. For the system administrator, IDS Log Cloud Analysis System (called ICAS) can provide fast and high reliability of the system.", "title": "" }, { "docid": "ed4d6179e2e432e752d7598c0db6ec59", "text": "In image deblurring, a fundamental problem is that the blur kernel suppresses a number of spatial frequencies that are difficult to recover reliably. In this paper, we explore the potential of a class-specific image prior for recovering spatial frequencies attenuated by the blurring process. Specifically, we devise a prior based on the class-specific subspace of image intensity responses to band-pass filters. We learn that the aggregation of these subspaces across all frequency bands serves as a good class-specific prior for the restoration of frequencies that cannot be recovered with generic image priors. In an extensive validation, our method, equipped with the above prior, yields greater image quality than many state-of-the-art methods by up to 5 dB in terms of image PSNR, across various image categories including portraits, cars, cats, pedestrians and household objects.", "title": "" }, { "docid": "705eca342fb014d0ae943a17c60a47c0", "text": "This is a critical design paper offering a possible scenario of use intended to provoke reflection about values and politics of design in persuasive computing. We describe the design of a system - Fit4Life - that encourages individuals to address the larger goal of reducing obesity in society by promoting individual healthy behaviors. Using the Persuasive Systems Design Model [26], this paper outlines the Fit4Life persuasion context, the technology, its use of persuasive messages, and an experimental design to test the system's efficacy. 
We also contribute a novel discussion of the ethical and sociocultural considerations involved in our design, an issue that has remained largely unaddressed in the existing persuasive technologies literature [29].", "title": "" }, { "docid": "8d432d8fd4a6d0f368a608ebca5d67d7", "text": "The origin and continuation of mankind is based on water. Water is one of the most abundant resources on earth, covering three-fourths of the planet’s surface. However, about 97% of the earth’s water is salt water in the oceans, and a tiny 3% is fresh water. This small percentage of the earth’s water—which supplies most of human and animal needs—exists in ground water, lakes and rivers. The only nearly inexhaustible sources of water are the oceans, which, however, are of high salinity. It would be feasible to address the water-shortage problem with seawater desalination; however, the separation of salts from seawater requires large amounts of energy which, when produced from fossil fuels, can cause harm to the environment. Therefore, there is a need to employ environmentally-friendly energy sources in order to desalinate seawater. After a historical introduction into desalination, this paper covers a large variety of systems used to convert seawater into fresh water suitable for human use. It also covers a variety of systems, which can be used to harness renewable energy sources; these include solar collectors, photovoltaics, solar ponds and geothermal energy. Both direct and indirect collection systems are included. The representative example of direct collection systems is the solar still. Indirect collection systems employ two subsystems; one for the collection of renewable energy and one for desalination. For this purpose, standard renewable energy and desalination systems are most often employed. Only industrially-tested desalination systems are included in this paper and they comprise the phase change processes, which include the multistage flash, multiple effect boiling and vapour compression and membrane processes, which include reverse osmosis and electrodialysis. The paper also includes a review of various systems that use renewable energy sources for desalination. Finally, some general guidelines are given for selection of desalination and renewable energy systems and the parameters that need to be considered. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b77d257b62ee7af929b64168c62fd785", "text": "The analysis of time series data is of interest to many application domains. But this analysis is challenging due to many reasons such as missing data in the series, unstructured nature of the data and errors in the data collection procedure, measuring equipment, etc. The problem of missing data while matching two time series is dealt with either by predicting a value for the missing data using the already collected data, or by completely ignoring the missing values. In this paper, we present an approach where we make use of the characteristics of the Mahalanobis Distance to inherently accommodate the missing values while finding the best match between two time series. Using this approach, we have designed two algorithms which can find the best match for a given query series in a candidate series, without imputing the missing values in the candidate. 
The initial algorithm finds the best nonwarped match between the candidate and the query time series, while the second algorithm is an extension of the initial algorithm to find the best match in the case of warped data using a Dynamic Time Warping (DTW) like algorithm. Thus, with experimental results we go on to conclude that the proposed warping algorithm is a good method for matching between two time series with warping and missing data.", "title": "" }, { "docid": "e299966eded9f65f6446b3cd7ab41f49", "text": "BACKGROUND Asthma is the most common chronic pulmonary disease during pregnancy. Several previous reports have documented reversible electrocardiographic changes during severe acute asthma attacks, including tachycardia, P pulmonale, right bundle branch block, right axis deviation, and ST segment and T wave abnormalities. CASE REPORT We present the case of a pregnant patient with asthma exacerbation in which acute bronchospasm caused S1Q3T3 abnormality on an electrocardiogram (ECG). The complete workup of ECG findings of S1Q3T3 was negative and correlated with bronchospasm. The S1Q3T3 electrocardiographic abnormality can be seen in acute bronchospasm in pregnant women. The other causes like pulmonary embolism, pneumothorax, acute lung disease, cor pulmonale, and left posterior fascicular block were excluded. CONCLUSIONS Asthma exacerbations are of considerable concern during pregnancy due to their adverse effect on the fetus, and optimization of asthma treatment during pregnancy is vital for achieving good outcomes. Prompt recognition of electrocardiographic abnormality and early treatment can prevent adverse perinatal outcomes.", "title": "" }, { "docid": "f4222d776f90050c15032e802d294d1a", "text": "We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns on the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undertake to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries into a plethora of aesthetic and feasible patterns.", "title": "" }, { "docid": "c20393a25f4e53be6df2bd49abf6635f", "text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.", "title": "" }, { "docid": "d003deabc7748959e8c5cc220b243e70", "text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). 
Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school. When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.", "title": "" }, { "docid": "94b85074da2eedcff74b9ad16c5b562c", "text": "The purpose of the paper is to investigate the design of rectangular patch antenna arrays fed by miscrostrip and coaxial lines at 28 GHz for future 5G applications. Our objective is to design a four element antenna array with a bandwidth higher than 1 GHz and a maximum radiation gain. The performances of the rectangular 4∗1 and 2∗2 patch antenna arrays designed on Rogers RT/Duroid 5880 substrate were optimized and the simulation results reveal that the performance of 4∗1 antenna array fed by microstrip line is better than 2∗2 antenna array fed by coaxial cable. We obtained for the topology of 4∗1 rectangular patch array antenna a bandwidth of 2.15 GHz and 1.3 GHz respectively with almost similar gains of the order of 13.3 dBi.", "title": "" }, { "docid": "3e6aac2e0ff6099aabeee97dc1292531", "text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.", "title": "" }, { "docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef", "text": "RICHARD M. 
FELDER and JONI SPURLIN, North Carolina State University, Raleigh, North Carolina 27695-7905, USA. E-mail: rmfelder@mindspring.com. The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?", "title": "" } ]
scidocsrr
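Illustrative sketch (not part of the dataset): the time-series passage in the record above (docid b77d257…) matches a query against candidate windows with the Mahalanobis distance so that missing samples are skipped rather than imputed. A minimal version of the non-warped variant follows; the diagonal covariance, the toy signal, and the function names are assumptions made for the demo, not the paper's exact formulation.

```python
import numpy as np

def mahalanobis_missing(query: np.ndarray, window: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance between query and a candidate window, computed
    only over positions where the window is observed (not NaN)."""
    observed = ~np.isnan(window)
    if not observed.any():
        return np.inf
    diff = query[observed] - window[observed]
    sub_cov = cov[np.ix_(observed, observed)]     # covariance restricted to observed positions
    return float(np.sqrt(diff @ np.linalg.solve(sub_cov, diff)))

def best_match(query: np.ndarray, candidate: np.ndarray, cov: np.ndarray) -> int:
    """Index of the candidate window (no warping) closest to the query."""
    m = len(query)
    scores = [mahalanobis_missing(query, candidate[i:i + m], cov)
              for i in range(len(candidate) - m + 1)]
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
q = np.sin(np.linspace(0, 3, 16))
c = rng.normal(0, 0.2, 200)
c[60:76] = q + rng.normal(0, 0.05, 16)
c[65] = np.nan                                    # a missing sample in the candidate
cov = np.eye(16) * 0.05                           # assumed covariance; diagonal for simplicity
print(best_match(q, c, cov))                      # should report 60 for this toy setup
```

A warped (DTW-like) variant would apply the same per-position masking inside the dynamic-programming recurrence.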
48e2653f4a59a0c4f889d6b75a1c41ff
Click chain model in web search
[ { "docid": "71be2ab6be0ab5c017c09887126053e5", "text": "One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high quality competing ad can detract users from another ad, while a low quality ad could cause the viewer to abandon the page", "title": "" } ]
[ { "docid": "7e4a485d489f9e9ce94889b52214c804", "text": "A situated ontology is a world model used as a computational resource for solving a particular set of problems. It is treated as neither a \\natural\" entity waiting to be discovered nor a purely theoretical construct. This paper describes how a semantico-pragmatic analyzer, Mikrokosmos, uses knowledge from a situated ontology as well as from language-speciic knowledge sources (lexicons and microtheory rules). Also presented are some guidelines for acquiring ontological concepts and an overview of the technology developed in the Mikrokosmos project for large-scale acquisition and maintenance of ontological databases. Tools for acquiring, maintaining, and browsing ontologies can be shared more readily than ontologies themselves. Ontological knowledge bases can be shared as computational resources if such tools provide translators between diierent representation formats. 1 A Situated Ontology World models (ontologies) in computational applications are artiicially constructed entities. They are created, not discovered. This is why so many diierent world models were suggested. Many ontologies are developed for purely theoretical purposes or without the context of a practical situation (e. Many practical knowledge-based systems, on the other hand, employ world or domain models without recognizing them as a separate knowledge source (e.g., Farwell, et al. 1993). In the eld of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology (e. In our continued eeorts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation (e.g., Onyshkevych and Nirenburg, 1994), we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. 1 Knowledge that supports this process is stored both in language-speciic knowledge sources and in an independently motivated, language-neutral ontology (e. An ontology for NLP purposes is a body of knowledge about the world (or a domain) that a) is a repository of primitive symbols used in meaning representation; b) organizes these symbols in a tangled subsumption hierarchy; and c) further interconnects these symbols using a rich system of semantic and discourse-pragmatic relations deened among the concepts. In order for such an ontology to become a computational resource for solving problems such as ambiguity and reference resolution, it must be actually constructed, not merely deened formally, as is the …", "title": "" }, { "docid": "7973587470f4e40f04288fb261445cac", "text": "In developed countries, vitamin B12 (cobalamin) deficiency usually occurs in children, exclusively breastfed ones whose mothers are vegetarian, causing low body stores of vitamin B12. The haematologic manifestation of vitamin B12 deficiency is pernicious anaemia. It is a megaloblastic anaemia with high mean corpuscular volume and typical morphological features, such as hyperlobulation of the nuclei of the granulocytes. In advanced cases, neutropaenia and thrombocytopaenia can occur, simulating aplastic anaemia or leukaemia. In addition to haematological symptoms, infants may experience weakness, fatigue, failure to thrive, and irritability. 
Other common findings include pallor, glossitis, vomiting, diarrhoea, and icterus. Neurological symptoms may affect the central nervous system and, in severe cases, rarely cause brain atrophy. Here, we report an interesting case, a 12-month old infant, who was admitted with neurological symptoms and diagnosed with vitamin B12 deficiency.", "title": "" }, { "docid": "effe9cf542849a0da41f984f7097228a", "text": "We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.", "title": "" }, { "docid": "edba38e0515256fbb2e72fce87747472", "text": "The risk of predation can have large effects on ecological communities via changes in prey behaviour, morphology and reproduction. Although prey can use a variety of sensory signals to detect predation risk, relatively little is known regarding the effects of predator acoustic cues on prey foraging behaviour. Here we show that an ecologically important marine crab species can detect sound across a range of frequencies, probably in response to particle acceleration. Further, crabs suppress their resource consumption in the presence of experimental acoustic stimuli from multiple predatory fish species, and the sign and strength of this response is similar to that elicited by water-borne chemical cues. When acoustic and chemical cues were combined, consumption differed from expectations based on independent cue effects, suggesting redundancies among cue types. These results highlight that predator acoustic cues may influence prey behaviour across a range of vertebrate and invertebrate taxa, with the potential for cascading effects on resource abundance.", "title": "" }, { "docid": "52462bd444f44910c18b419475a6c235", "text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. 
The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).", "title": "" }, { "docid": "c250963a2b536a9ce9149f385f4d2a0f", "text": "The systematic review (SR) is a methodology used to find and aggregate all relevant existing evidence about a specific research question of interest. One of the activities associated with the SR process is the selection of primary studies, which is a time consuming manual task. The quality of primary study selection impacts the overall quality of SR. The goal of this paper is to propose a strategy named “Score Citation Automatic Selection” (SCAS), to automate part of the primary study selection activity. The SCAS strategy combines two different features, content and citation relationships between the studies, to make the selection activity as automated as possible. Aiming to evaluate the feasibility of our strategy, we conducted an exploratory case study to compare the accuracy of selecting primary studies manually and using the SCAS strategy. The case study shows that for three SRs published in the literature and previously conducted in a manual implementation, the average effort reduction was 58.2 % when applying the SCAS strategy to automate part of the initial selection of primary studies, and the percentage error was 12.98 %. Our case study provided confidence in our strategy, and suggested that it can reduce the effort required to select the primary studies without adversely affecting the overall results of SR.", "title": "" }, { "docid": "db4784e051b798dfa6c3efa5e84c4d00", "text": "Purpose – The purpose of this paper is to propose and verify that the technology acceptance model (TAM) can be employed to explain and predict the acceptance of mobile learning (M-learning); an activity in which users access learning material with their mobile devices. The study identifies two factors that account for individual differences, i.e. perceived enjoyment (PE) and perceived mobility value (PMV), to enhance the explanatory power of the model. Design/methodology/approach – An online survey was conducted to collect data. A total of 313 undergraduate and graduate students in two Taiwan universities answered the questionnaire. Most of the constructs in the model were measured using existing scales, while some measurement items were created specifically for this research. Structural equation modeling was employed to examine the fit of the data with the model by using the LISREL software. Findings – The results of the data analysis shows that the data fit the extended TAM model well. Consumers hold positive attitudes for M-learning, viewing M-learning as an efficient tool. 
Specifically, the results show that individual differences have a great impact on user acceptance and that the perceived enjoyment and perceived mobility can predict user intentions of using M-learning. Originality/value – There is scant research available in the literature on user acceptance of M-learning from a customer’s perspective. The present research shows that TAM can predict user acceptance of this new technology. Perceived enjoyment and perceived mobility value are antecedents of user acceptance. The model enhances our understanding of consumer motivation of using M-learning. This understanding can aid our efforts when promoting M-learning.", "title": "" }, { "docid": "996eb4470d33f00ed9cb9bcc52eb5d82", "text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.", "title": "" }, { "docid": "ba57246214ea44910e94471375836d87", "text": "Collaborative filtering is a technique for recommending documents to users based on how similar their tastes are to other users. If two users tend to agree on what they like, the system will recommend the same documents to them. The generalized vector space model of information retrieval represents a document by a vector of its similarities to all other documents. The process of collaborative filtering is nearly identical to the process of retrieval using GVSM in a matrix of user ratings. Using this observation, a model for filtering collaboratively using document content is possible.", "title": "" }, { "docid": "774bf4b0a2c8fe48607e020da2737041", "text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. 
Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.", "title": "" }, { "docid": "e99369633599d38d84ad1a5c74695475", "text": "Sarcasm is a form of language in which individual convey their message in an implicit way i.e. the opposite of what is implied. Sarcasm detection is the task of predicting sarcasm in text. This is the crucial step in sentiment analysis due to inherently ambiguous nature of sarcasm. With this ambiguity, sarcasm detection has always been a difficult task, even for humans. Therefore sarcasm detection has gained importance in many Natural Language Processing applications. In this paper, we describe approaches, issues, challenges and future scopes in sarcasm detection.", "title": "" }, { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "8f1bcaed29644b80a623be8d26b81c20", "text": "The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.", "title": "" }, { "docid": "07575ce75d921d6af72674e1fe563ff7", "text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. 
In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.", "title": "" }, { "docid": "0297af005c837e410272ab3152942f90", "text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.", "title": "" }, { "docid": "e1c927d7fbe826b741433c99fff868d0", "text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.", "title": "" }, { "docid": "0c8b192807a6728be21e6a19902393c0", "text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. 
These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.", "title": "" }, { "docid": "41ac115647c421c44d7ef1600814dc3e", "text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.", "title": "" }, { "docid": "041ca42d50e4cac92cf81c989a8527fb", "text": "Helix antenna consists of a single conductor or multi-conductor open helix-shaped. Helix antenna has a three-dimensional shape. The shape of the helix antenna resembles a spring and the diameter and the distance between the windings of a certain size. This study aimed to design a signal amplifier wifi on 2.4 GHz. 
Materials used include pipe, copper wire, various connectors and wireless adapters, and various other components. The MMANA-GAL tool was used to simulate the helix antenna, and the antenna was further tested with the WirelessMon software to measure the wifi signal strength. Based on the MMANA-GAL simulation, the radiation pattern achieves a gain of 4.5 dBi with horizontal polarization, an F/B of −0.41 dB, a rear azimuth of 120.0 and elevation of 60.0 at 2400 MHz, and an impedance of R 27.9 and jX −430.9 at an elevation of 64.40 over real ground at a height of 0.50 m; the wifi signal strength increased from 47% to 55%.", "title": "" }, { "docid": "857a2098e5eb48340699c6b7a29ec293", "text": "Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity ρ(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of ρ(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.", "title": "" } ]
scidocsrr
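Illustrative sketch (not part of the dataset): the compressibility passage above (docid 857a209…) defines ρ(x) as the best compression ratio any finite-state encoder can attain on an individual sequence, and shows it is approached by a universal incremental-parsing algorithm. The sketch below performs an LZ78-style parse and reports a rough normalized code-length estimate; the exact normalization used here is an assumption for illustration, since the bound in the paper is stated differently.

```python
import math
import random

def lz78_phrases(s):
    """Incremental (LZ78-style) parsing of s into distinct phrases."""
    seen, phrases, cur = set(), [], ""
    for ch in s:
        cur += ch
        if cur not in seen:
            seen.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:                       # trailing phrase may duplicate an earlier one
        phrases.append(cur)
    return phrases

def compressibility_estimate(s):
    """Roughly c(n) * (log2 c(n) + 1) / n bits per symbol, where c(n) is the
    number of parsed phrases; an upper bound that shrinks toward rho(x)
    as the sequence gets long."""
    c, n = len(lz78_phrases(s)), len(s)
    return c * (math.log2(c) + 1) / n

random.seed(0)
regular = "ab" * 500
noisy = "".join(random.choice("ab") for _ in range(1000))
print(round(compressibility_estimate(regular), 3))   # low: highly regular sequence
print(round(compressibility_estimate(noisy), 3))     # noticeably higher for coin flips
```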
5a7052cb7df7235f112f0d4f750339a0
Exploring ROI size in deep learning based lipreading
[ { "docid": "7fe3cf6b8110c324a98a90f31064dadb", "text": "Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the base line is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.", "title": "" } ]
[ { "docid": "335daed2a03f710d25e1e0a43c600453", "text": "The Digital Bibliography and Library Project (DBLP) is a popular computer science bibliography website hosted at the University of Trier in Germany. It currently contains 2,722,212 computer science publications with additional information about the authors and conferences, journals, or books in which these are published. Although the database covers the majority of papers published in this field of research, it is still hard to browse the vast amount of textual data manually to find insights and correlations in it, in particular time-varying ones. This is also problematic if someone is merely interested in all papers of a specific topic and possible correlated scientific words which may hint at related papers. To close this gap, we propose an interactive tool which consists of two separate components, namely data analysis and data visualization. We show the benefits of our tool and explain how it might be used in a scenario where someone is confronted with the task of writing a state-of-the art report on a specific topic. We illustrate how data analysis, data visualization, and the human user supported by interaction features can work together to find insights which makes typical literature search tasks faster.", "title": "" }, { "docid": "a601abae0a3d54d4aa3ecbb4bd09755a", "text": "Article history: Received 27 March 2008 Received in revised form 2 September 2008 Accepted 20 October 2008", "title": "" }, { "docid": "51fb43ac979ce0866eb541adc145ba70", "text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.", "title": "" }, { "docid": "e8b199733c0304731a60db7c42987cf6", "text": "This ethnographic study of 22 diverse families in the San Francisco Bay Area provides a holistic account of parents' attitudes about their children's use of technology. We found that parents from different socioeconomic classes have different values and practices around technology use, and that those values and practices reflect structural differences in their everyday lives. 
Calling attention to class differences in technology use challenges the prevailing practice in human-computer interaction of designing for those similar to oneself, which often privileges middle-class values and practices. By discussing the differences between these two groups and the advantages of researching both, this research highlights the benefits of explicitly engaging with socioeconomic status as a category of analysis in design.", "title": "" }, { "docid": "2ec9ac2c283fa0458eb97d1e359ec358", "text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.", "title": "" }, { "docid": "6566ad2c654274105e94f99ac5e20401", "text": "This paper presents a universal morphological feature schema that represents the finest distinctions in meaning that are expressed by overt, affixal inflectional morphology across languages. This schema is used to universalize data extracted from Wiktionary via a robust multidimensional table parsing algorithm and feature mapping algorithms, yielding 883,965 instantiated paradigms in 352 languages. These data are shown to be effective for training morphological analyzers, yielding significant accuracy gains when applied to Durrett and DeNero’s (2013) paradigm learning framework.", "title": "" }, { "docid": "405bae0d413aa4b5fef0ac8b8c639235", "text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. 
Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.", "title": "" }, { "docid": "4a761bed54487cb9c34fc0ff27883944", "text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST1 using an SVM layer. Accuracy is improved from benchmark 79% to 90%.", "title": "" }, { "docid": "c0762517ebbae00ab5ee1291460c164c", "text": "This paper compares various topologies for 6.6kW on-board charger (OBC) to find out suitable topology. In general, OBC consists of 2-stage; power factor correction (PFC) stage and DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi bridgeless PFC are considered as PFC circuit, and full-bridge converter, phase shift full-bridge converter, and series resonant converter are taken into account for DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is implemented in order to calculate the efficiency of each topology for PFC circuit and DC-DC converter circuit. In addition, the volume of magnetic components and number of semi-conductor elements are considered. Based on these results, topology selection guideline according to the system specification of 6.6kW OBC is proposed.", "title": "" }, { "docid": "12274a9b350f1d1f7a3eb0cd865f260c", "text": "A large amount of multimedia data (e.g., image and video) is now available on the Web. A multimedia entity does not appear in isolation, but is accompanied by various forms of metadata, such as surrounding text, user tags, ratings, and comments etc. Mining these textual metadata has been found to be effective in facilitating multimedia information processing and management. A wealth of research efforts has been dedicated to text mining in multimedia. This chapter provides a comprehensive survey of recent research efforts. Specifically, the survey focuses on four aspects: (a) surrounding text mining; (b) tag mining; (c) joint text and visual content mining; and (d) cross text and visual content mining. Furthermore, open research issues are identified based on the current research efforts.", "title": "" }, { "docid": "7f71e539817c80aaa0a4fe3b68d76948", "text": "We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). 
To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and cooccurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refining them. We then incorporate these maps as new cues into a multiple instance learning framework (MIL), propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.", "title": "" }, { "docid": "a3585d424a54c31514aba579b80d8231", "text": "The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired from techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show on numerical simulations that the method is promising and allows to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.", "title": "" }, { "docid": "07941e1f7a8fd0bbc678b641b80dc037", "text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. 
The conclusions yielded from this discussion can be summarized as follows: (1) most existent research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3)the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progresses.", "title": "" }, { "docid": "ff20e5cd554cd628eba07776fa9a5853", "text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.", "title": "" }, { "docid": "8fe6e954db9080e233bbc6dbf8117914", "text": "This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.", "title": "" }, { "docid": "04f705462bdd34a8d82340fb59264a51", "text": "This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. 
The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "f733b53147ce1765709acfcba52c8bbf", "text": "BACKGROUND\nIt is important to evaluate the impact of cannabis use on onset and course of psychotic illness, as the increasing number of novice cannabis users may translate into a greater public health burden. This study aims to examine the relationship between adolescent onset of regular marijuana use and age of onset of prodromal symptoms, or first episode psychosis, and the manifestation of psychotic symptoms in those adolescents who use cannabis regularly.\n\n\nMETHODS\nA review was conducted of the current literature for youth who initiated cannabis use prior to the age of 18 and experienced psychotic symptoms at, or prior to, the age of 25. Seventeen studies met eligibility criteria and were included in this review.\n\n\nRESULTS\nThe current weight of evidence supports the hypothesis that early initiation of cannabis use increases the risk of early onset psychotic disorder, especially for those with a preexisting vulnerability and who have greater severity of use. There is also a dose-response association between cannabis use and symptoms, such that those who use more tend to experience greater number and severity of prodromal and diagnostic psychotic symptoms. Those with early-onset psychotic disorder and comorbid cannabis use show a poorer course of illness in regards to psychotic symptoms, treatment, and functional outcomes. However, those with early initiation of cannabis use appear to show a higher level of social functioning than non-cannabis users.\n\n\nCONCLUSIONS\nAdolescent initiation of cannabis use is associated, in a dose-dependent fashion, with emergence and severity of psychotic symptoms and functional impairment such that those who initiate use earlier and use at higher frequencies demonstrate poorer illness and treatment outcomes. These associations appear more robust for adolescents at high risk for developing a psychotic disorder.", "title": "" }, { "docid": "f59adaac85f7131bf14335dad2337568", "text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. 
Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users make traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.", "title": "" } ]
scidocsrr
366061cc202731f6c17afeb18d38db19
The DSM diagnostic criteria for gender identity disorder in adolescents and adults.
[ { "docid": "e61d7b44a39c5cc3a77b674b2934ba40", "text": "The sexual behaviors and attitudes of male-to-female (MtF) transsexuals have not been investigated systematically. This study presents information about sexuality before and after sex reassignment surgery (SRS), as reported by 232 MtF patients of one surgeon. Data were collected using self-administered questionnaires. The mean age of participants at time of SRS was 44 years (range, 18-70 years). Before SRS, 54% of participants had been predominantly attracted to women and 9% had been predominantly attracted to men. After SRS, these figures were 25% and 34%, respectively.Participants' median numbers of sexual partners before SRS and in the last 12 months after SRS were 6 and 1, respectively. Participants' reported number of sexual partners before SRS was similar to the number of partners reported by male participants in the National Health and Social Life Survey (NHSLS). After SRS, 32% of participants reported no sexual partners in the last 12 months, higher than reported by male or female participants in the NHSLS. Bisexual participants reported more partners before and after SRS than did other participants. 49% of participants reported hundreds of episodes or more of sexual arousal to cross-dressing or cross-gender fantasy (autogynephilia) before SRS; after SRS, only 3% so reported. More frequent autogynephilic arousal after SRS was correlated with more frequent masturbation, a larger number of sexual partners, and more frequent partnered sexual activity. 85% of participants experienced orgasm at least occasionally after SRS and 55% ejaculated with orgasm.", "title": "" }, { "docid": "a4a15096e116a6afc2730d1693b1c34f", "text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.", "title": "" } ]
[ { "docid": "1f629796e9180c14668e28b83dc30675", "text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.", "title": "" }, { "docid": "98aec0805e83e344a6b9898fb65e1a11", "text": "Technology offers the potential to objectively monitor people's eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user's caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system's estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system's efficacy and develop more robust data input and activity inference methods.", "title": "" }, { "docid": "670b58d379b7df273309e55cf8e25db4", "text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.", "title": "" }, { "docid": "a76ba02ef0f87a41cdff1a4046d4bba1", "text": "This paper proposes two RF self-interference cancellation techniques. 
Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "0be3de2b6f0dd5d3158cc7a98286d571", "text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.", "title": "" }, { "docid": "b0cba371bb9628ac96a9ae2bb228f5a9", "text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.", "title": "" }, { "docid": "f5703292e4c722332dcd85b172a3d69e", "text": "Since an ever-increasing part of the population makes use of social media in their day-to-day lives, social media data is being analysed in many different disciplines. The social media analytics process involves four distinct steps, data discovery, collection, preparation, and analysis. While there is a great deal of literature on the challenges and difficulties involving specific data analysis methods, there hardly exists research on the stages of data discovery, collection, and preparation. 
To address this gap, we conducted an extended and structured literature analysis through which we identified challenges addressed and solutions proposed. The literature search revealed that the volume of data was most often cited as a challenge by researchers. In contrast, other categories have received less attention. Based on the results of the literature search, we discuss the most important challenges for researchers and present potential solutions. The findings are used to extend an existing framework on social media analytics. The article provides benefits for researchers and practitioners who wish to collect and analyse social media data.", "title": "" }, { "docid": "4ae0bb75493e5d430037ba03fcff4054", "text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.", "title": "" }, { "docid": "9a5ef746c96a82311e3ebe8a3476a5f4", "text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.", "title": "" }, { "docid": "3d8cd89ae0b69ff4820f253aec3dbbeb", "text": "The importance of information as a resource for economic growth and education is steadily increasing. Due to technological advances in computer industry and the explosive growth of the Internet much valuable information will be available in digital libraries. This paper introduces a system that aims to support a user's browsing activities in document sets retrieved from a digital library. Latent Semantic Analysis is applied to extract salient semantic structures and citation patterns of documents stored in a digital library in a computationally expensive batch job. At retrieval time, cluster techniques are used to organize retrieved documents into clusters according to the previously extracted semantic similarities. A modified Boltzman algorithm [1] is employed to spatially organize the resulting clusters and their documents in the form of a three-dimensional information landscape or \"i-scape\". The i-scape is then displayed for interactive exploration via a multi-modal, virtual reality CAVE interface [8]. Users' browsing activities are recorded and user models are extracted to give newcomers online help based on previous navigation activity as well as to enable experienced users to recognize and exploit past user traces. 
In this way, the system provides interactive services to assist users in the spatial navigation, interpretation, and detailed exploration of potentially large document sets matching a query.", "title": "" }, { "docid": "2cd5075ed124f933fe56fe1dd566df22", "text": "We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. We show that MIDI-VAE can perform style transfer on symbolic music by automatically changing pitches, dynamics and instruments of a music piece from, e.g., a Classical to a Jazz style. We evaluate the efficacy of the style transfer by training separate style validation classifiers. Our model can also interpolate between short pieces of music, produce medleys and create mixtures of entire songs. The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces. To the best of our knowledge, this work represents the first successful attempt at applying neural style transfer to complete musical compositions.", "title": "" }, { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "b5d22d191745e4b94c6b7784b52c8ed8", "text": "One of the biggest problems of SMEs is their tendencies to financial distress because of insufficient finance background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. CHAID algorithm has been used for development of the EWS. Developed EWS can be served like a tailor made financial advisor in decision making process of the firms with its automated nature to the ones who have inadequate financial background. Besides, an application of the model implemented which covered 7853 SMEs based on Turkish Central Bank (TCB) 2007 data. By using EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps has been determined for financial risk mitigation. 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "4ad261905326b55a40569ebbc549a67c", "text": "OBJECTIVES\nTo analyze the Spanish experience in an international study which evaluated tocilizumab in patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs (DMARDs) or tumor necrosis factor inhibitors (TNFis) in a clinical practice setting.\n\n\nMATERIAL AND METHODS\nSubanalysis of 170 patients with RA from Spain who participated in a phase IIIb, open-label, international clinical trial. Patients presented inadequate response to DMARDs or TNFis. They received 8mg/kg of tocilizumab every 4 weeks in combination with a DMARD or as monotherapy during 20 weeks. Safety and efficacy of tocilizumab were analyzed. Special emphasis was placed on differences between failure to a DMARD or to a TNFi and the need to switch to tocilizumab with or without a washout period in patients who had previously received TNFi.\n\n\nRESULTS\nThe most common adverse events were infections (25%), increased total cholesterol (38%) and transaminases (15%). Five patients discontinued the study due to an adverse event. After six months of tocilizumab treatment, 71/50/30% of patients had ACR 20/50/70 responses, respectively. A higher proportion of TNFi-naive patients presented an ACR20 response: 76% compared to 64% in the TNFi group with previous washout and 66% in the TNFi group without previous washout.\n\n\nCONCLUSIONS\nSafety results were consistent with previous results in patients with RA and an inadequate response to DMARDs or TNFis. Tocilizumab is more effective in patients who did not respond to conventional DMARDs than in patients who did not respond to TNFis.", "title": "" }, { "docid": "a87c60deb820064abaa9093398937ff3", "text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.", "title": "" }, { "docid": "5ea912d602b0107ae9833292da22b800", "text": "We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors model anchors via a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on transfer tasks. 
Our experiment on COCO detection task shows that MetaAnchor consistently outperforms the counterparts in various scenarios.", "title": "" }, { "docid": "866b95a50dede975eeff9aeec91a610b", "text": "In this paper, we focus on differential privacy preserving spectral graph analysis. Spectral graph analysis deals with the analysis of the spectra (eigenvalues and eigenvector components) of the graph’s adjacency matrix or its variants. We develop two approaches to computing the ε-differential eigen decomposition of the graph’s adjacency matrix. The first approach, denoted as LNPP, is based on the Laplace Mechanism that calibrates Laplace noise on the eigenvalues and every entry of the eigenvectors based on their sensitivities. We derive the global sensitivities of both eigenvalues and eigenvectors based on the matrix perturbation theory. Because the output eigenvectors after perturbation are no longer orthogonormal, we postprocess the output eigenvectors by using the state-of-the-art vector orthogonalization technique. The second approach, denoted as SBMF, is based on the exponential mechanism and the properties of the matrix Bingham-von Mises-Fisher density for network data spectral analysis. We prove that the sampling procedure achieves differential privacy. We conduct empirical evaluation on a real social network data and compare the two approaches in terms of utility preservation (the accuracy of spectra and the accuracy of low rank approximation) under the same differential privacy threshold. Our empirical evaluation results show that LNPP generally incurs smaller utility loss.", "title": "" }, { "docid": "a7317f3f1b4767f20c38394e519fa0d8", "text": "The development of the concept of burden for use in research lacks consistent conceptualization and operational definitions. The purpose of this article is to analyze the concept of burden in an effort to promote conceptual clarity. The technique advocated by Walker and Avant is used to analyze this concept. Critical attributes of burden include subjective perception, multidimensional phenomena, dynamic change, and overload. Predisposing factors are caregiver's characteristics, the demands of caregivers, and the involvement in caregiving. The consequences of burden generate problems in care-receiver, caregiver, family, and health care system. Overall, this article enables us to advance this concept, identify the different sources of burden, and provide directions for nursing intervention.", "title": "" } ]
scidocsrr
24c70b1ee4001017b1ef9740520874dd
Compositional Vector Space Models for Knowledge Base Inference
[ { "docid": "8b46e6e341f4fdf4eb18e66f237c4000", "text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.", "title": "" }, { "docid": "78cda62ca882bb09efc08f7d4ea1801e", "text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven", "title": "" } ]
[ { "docid": "011ff2d5995a46a686d9edb80f33b8ca", "text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.", "title": "" }, { "docid": "a7aac88bd2862bafc2b4e1e562a7b86a", "text": "Longitudinal melanonychia presents in various conditions including neoplastic and reactive disorders. It is much more frequently seen in non-Caucasians than Caucasians. While most cases of nail apparatus melanoma start as longitudinal melanonychia, melanocytic nevi of the nail apparatus also typically accompany longitudinal melanonychia. Identifying the suspicious longitudinal melanonychia is therefore an important task for dermatologists. Dermoscopy provides useful information for making this decision. The most suspicious dermoscopic feature of early nail apparatus melanoma is irregular lines on a brown background. Evaluation of the irregularity may be rather subjective, but through experience, dermatologists can improve their diagnostic skills of longitudinal melanonychia, including benign conditions showing regular lines. Other important dermoscopic features of early nail apparatus melanoma are micro-Hutchinson's sign, a wide pigmented band, and triangular pigmentation on the nail plate. Although there is as yet no solid evidence concerning the frequency of dermoscopic follow up, we recommend checking the suspicious longitudinal melanonychia every 6 months. Moreover, patients with longitudinal melanonychia should be asked to return to the clinic quickly if the lesion shows obvious changes. Diagnosis of amelanotic or hypomelanotic melanoma affecting the nail apparatus is also challenging, but melanoma should be highly suspected if remnants of melanin granules are detected dermoscopically.", "title": "" }, { "docid": "f7aceafa35aaacb5b2b854a8b7e275b6", "text": "In this paper, the study and implementation of a high frequency pulse LED driver with self-oscillating circuit is presented. The self-oscillating half-bridge series resonant inverter is adopted in this LED driver and the circuit characteristics of LED with high frequency pulse driving voltage is also discussed. LED module is connected with full bridge diode rectifier but without low pass filter and this LED module is driven with high frequency pulse. In additional, the self-oscillating resonant circuit with saturable core is used to achieve zero voltage switching and to control the LED current. The LED equivalent circuit of resonant circuit and the operating principle of the self-oscillating half-bridge inverter are discussed in detail. 
Finally, an 18 W high frequency pulse LED driver is implemented to verify the feasibility. Experimental results show that the circuit efficiency is over 86.5% when input voltage operating within AC 110 ± 10 Vrms and the maximum circuit efficiency is up to 89.2%.", "title": "" }, { "docid": "e729c06c5a4153af05740a01509ee5d5", "text": "Understanding large-scale document collections in an efficient manner is an important problem. Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.", "title": "" }, { "docid": "74a3c4dae9573325b292da736d46a78e", "text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.", "title": "" }, { "docid": "1ec8f8e1b34ebcf8a0c99975d2fa58c4", "text": "BACKGROUND\nTo compare simultaneous recordings from an external patch system specifically designed to ensure better P-wave recordings and standard Holter monitor to determine diagnostic efficacy. Holter monitors are a mainstay of clinical practice, but are cumbersome to access and wear and P-wave signal quality is frequently inadequate.\n\n\nMETHODS\nThis study compared the diagnostic efficacy of the P-wave centric electrocardiogram (ECG) patch (Carnation Ambulatory Monitor) to standard 3-channel (leads V1, II, and V5) Holter monitor (Northeast Monitoring, Maynard, MA). Patients were referred to a hospital Holter clinic for standard clinical indications. Each patient wore both devices simultaneously and served as their own control. 
Holter and Patch reports were read in a blinded fashion by experienced electrophysiologists unaware of the findings in the other corresponding ECG recording. All patients, technicians, and physicians completed a questionnaire on comfort and ease of use, and potential complications.\n\n\nRESULTS\nIn all 50 patients, the P-wave centric patch recording system identified rhythms in 23 patients (46%) that altered management, compared to 6 Holter patients (12%), P<.001. The patch ECG intervals PR, QRS and QT correlated well with the Holter ECG intervals having correlation coefficients of 0.93, 0.86, and 0.94, respectively. Finally, 48 patients (96%) preferred wearing the patch monitor.\n\n\nCONCLUSIONS\nA single-channel ambulatory patch ECG monitor, designed specifically to ensure that the P-wave component of the ECG be visible, resulted in a significantly improved rhythm diagnosis and avoided inaccurate diagnoses made by the standard 3-channel Holter monitor.", "title": "" }, { "docid": "fa0f3d0d78040d6b89087c24d8b7c07c", "text": "Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work.", "title": "" }, { "docid": "9d9afbd6168c884f54f72d3daea57ca7", "text": "Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods.", "title": "" }, { "docid": "cabfa3e645415d491ed4ca776b9e370a", "text": "The impact of social networks in customer buying decisions is rapidly increasing, because they are effective in shaping public opinion. 
This paper helps marketers analyze a social network’s members based on different characteristics as well as choose the best method for identifying influential people among them. Marketers can then use these influential people as seeds for market products/services. Considering the importance of opinion leadership in social networks, the authors provide a comprehensive overview of existing literature. Studies show that different titles (such as opinion leaders, influential people, market mavens, and key players) are used to refer to the influential group in social networks. In this paper, all the properties presented for opinion leaders in the form of different titles are classified into three general categories, including structural, relational, and personal characteristics. Furthermore, based on studying opinion leader identification methods, appropriate parameters are extracted in a comprehensive chart to evaluate and compare these methods accurately. based marketing, word-of-mouth marketing has more creditability (Li & Du, 2011), because there is no direct link between the sender and the merchant. As a result, information is considered independent and subjective. In recent years, many researches in word-of-mouth marketing investigate discovering influential nodes in a social network. These influential people are called opinion leaders in the literature. Organizations interested in e-commerce need to identify opinion leaders among their customers, also the place (web site) which they are going online. This is the place they can market their products. Social Network Analysis Regarding the importance of interpersonal relationship, studies are looking for formal methods to measures who talks to whom in a community. These methods are known as social network analysis (Scott, 1991; Wasserman & Faust, 1994; Rogers & Kincaid, 1981; Valente & Davis, 1999). Social network analysis includes the study of the interpersonal relationships. It usually is more focused on the network itself, rather than on the attributes of the members (Li & Du, 2011). Valente and Rogers (1995) have described social network analysis from the point of view of interpersonal communication by “formal methods of measuring who talks to whom within a community”. Social network analysis enables researchers to identify people who are more central in the network and so more influential. By using these central people or opinion leaders as seeds diffusion of a new product or service can be accelerated (Katz & Lazarsfeld, 1955; Valente & Davis, 1999). Importance of Social Networks for Marketing The importance of social networks as a marketing tool is increasing, and it includes diverse areas (Even-Dar & Shapirab, 2011). Analysis of interdependencies between customers can improve targeted marketing as well as help organization in acquisition of new customers who are not detectable by traditional techniques. By recent technological developments social networks are not limited in face-to-face and physical relationships. Furthermore, online social networks have become a new medium for word-of-mouth marketing.
Although the face-to-face word-of-mouth has a greater impact on consumer purchasing decisions over printed information because of its vividness and credibility, in recent years with the growth of the Internet and virtual communities the written word-of-mouth (word-of-mouse) has been created in the online channels (Mak, 2008). Consider a company that wants to launch a new product. This company can benefit from popular social networks like Facebook and Myspace rather than using classical advertising channels. Then, convincing several key persons in each network to adopt the new product, can help a company to exploit an effective diffusion in the network through word-of-mouth. According to Nielsen’s survey of more than 26,000 internet uses, 78% of respondents exhibited recommendations from others are the most trusted source when considering a product or service (Nielsen, 2007). Based on another study conducted by Deloitte’s Consumer Products group, almost 62% of consumers who read consumer-written product reviews online declare their purchase decisions have been directly influenced by the user reviews (Delottie, 2007). Empirical studies have demonstrated that new ideas and practices spread through interpersonal communication (Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Hawkins et al. (1995) suggest that companies can use four possible courses of action, including marketing research, product sampling, retailing/personal selling and advertising to use their knowledge of opinion leaders to their advantage. The authors of this paper in a similar study have done a review of related literature using social networks for improving marketing response. They discuss the benefits and challenges of utilizing interpersonal relationships in a network as well as opinion leader identification; also, a three step process to show how firms can apply social networks for their marketing activities has been proposed (Jafari Momtaz et al., 2011). While applications of opinion leadership in business and marketing have been widely studied, it generally deals with the development of measurement scale (Burt, 1999), its importance in the social sciences (Flynn et al., 1994), and its application to various areas related to the marketing, such as the health care industry, political science (Burt, 1999) and public communications (Howard et al., 2000; Locock et al., 2001). In this paper, a comprehensive review of studies in the field of opinion leadership and employing social networks to improve the marketing response is done. In the next sec", "title": "" }, { "docid": "8250999ad1b7278ff123cd3c89b5d2d9", "text": "Drawing on Bronfenbrenner’s ecological theory and prior empirical research, the current study examines the way that blogging and social networking may impact feelings of connection and social support, which in turn could impact maternal well-being (e.g., marital functioning, parenting stress, and depression). 
One hundred and fifty-seven new mothers reported on their media use and various well-being variables. On average, mothers were 27 years old (SD = 5.15) and infants were 7.90 months old (SD = 5.21). All mothers had access to the Internet in their home. New mothers spent approximately 3 hours on the computer each day, with most of this time spent on the Internet. Findings suggested that frequency of blogging predicted feelings of connection to extended family and friends, which then predicted perceptions of social support. This in turn predicted maternal well-being, as measured by marital satisfaction, couple conflict, parenting stress, and depression. In sum, blogging may improve new mothers’ well-being, as they feel more connected to the world outside their home through the Internet.", "title": "" }, { "docid": "aa5d8162801abcc81ac542f7f2a423e5", "text": "Prediction of popularity has a profound impact on social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieving promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by taking both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding and learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new user-post pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).", "title": "" }, { "docid": "708c9b97f4a393ac49688d913b1d2cc6", "text": "Cognitive NLP systems, i.e., NLP systems that make use of behavioral data, augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain imaging, etc. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement/gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on a Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. 
We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features.", "title": "" }, { "docid": "d5f905fb66ba81ecde0239a4cc3bfe3f", "text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.", "title": "" }, { "docid": "400a56ea0b2c005ed16500f0d7818313", "text": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this paper, we employ a recurrent neural network to predict real estate prices using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error and mean absolute percentage error.", "title": "" }, { "docid": "b8c59cb962a970daaf012b15bcb8413d", "text": "Joint image filters leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods either rely on various explicit filter constructions or hand-designed objective functions, thereby making it difficult to understand, improve, and accelerate these filters in a coherent framework. In this paper, we propose a learning-based approach for constructing joint filters based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images. 
We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities, e.g., flash/non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive experimental evaluations with state-of-the-art methods.", "title": "" }, { "docid": "6db749b222a44764cf07bde527c230a3", "text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.", "title": "" }, { "docid": "83ed2dfe4456bc3cc8052747e7df7bfc", "text": "Dietary restriction has been shown to have several health benefits including increased insulin sensitivity, stress resistance, reduced morbidity, and increased life span. The mechanism remains unknown, but the need for a long-term reduction in caloric intake to achieve these benefits has been assumed. We report that when C57BL6 mice are maintained on an intermittent fasting (alternate-day fasting) dietary-restriction regimen their overall food intake is not decreased and their body weight is maintained. Nevertheless, intermittent fasting resulted in beneficial effects that met or exceeded those of caloric restriction including reduced serum glucose and insulin levels and increased resistance of neurons in the brain to excitotoxic stress. Intermittent fasting therefore has beneficial effects on glucose regulation and neuronal resistance to injury in these mice that are independent of caloric intake.", "title": "" }, { "docid": "90c6cf2fd66683843a8dd549676727d5", "text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. 
The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.", "title": "" }, { "docid": "21f079e590e020df08d461ba78a26d65", "text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.", "title": "" }, { "docid": "1e852e116c11a6c7fb1067313b1ffaa3", "text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013", "title": "" } ]
scidocsrr
b47ad52c6259a7678a2215e570b97c72
Stability of cyberbullying victimization among adolescents: Prevalence and association with bully-victim status and psychosocial adjustment
[ { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, and acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and 17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to a greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends.", "title": "" } ]
[ { "docid": "31ec7ef4e68950919054b59942d4dbfa", "text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.", "title": "" }, { "docid": "c9f48010cdf39b4d024818f1bbb21307", "text": "This paper proposes to use probabilistic model checking to synthesize optimal robot policies in multi-tasking autonomous systems that are subject to human-robot interaction. Given the convincing empirical evidence that human behavior can be related to reinforcement models, we take as input a well-studied Q-table model of the human behavior for flexible scenarios. We first describe an automated procedure to distill a Markov decision process (MDP) for the human in an arbitrary but fixed scenario. The distinctive issue is that – in contrast to existing models – under-specification of the human behavior is included. Probabilistic model checking is used to predict the human’s behavior. Finally, the MDP model is extended with a robot model. Optimal robot policies are synthesized by analyzing the resulting two-player stochastic game. Experimental results with a prototypical implementation using PRISM show promising results.", "title": "" }, { "docid": "c5b2f22f1cc160b19fa689120c35c693", "text": "Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. 
The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.", "title": "" }, { "docid": "3f8f835605b34d27802f6f2f0a363ae2", "text": "*Correspondence: Enrico Di Minin, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; School of Life Sciences, Westville Campus, University of KwaZulu-Natal, PO Box 54001 (University Road), Durban 4000, South Africa enrico.di.minin@helsinki.fi; Tuuli Toivonen, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; Department of Geosciences and Geography, University of Helsinki, PO Box 64 (Gustaf Hällströminkatu 2a), 00014 Helsinki, Finland tuuli.toivonen@helsinki.fi These authors have contributed equally to this work.", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "cb4518f95b82e553b698ae136362bd59", "text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is sufficiently broad and yet sufficiently detailed when it comes to key concepts. The text is not tailored to the field of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. 
The chapter is organized in the following sections:", "title": "" }, { "docid": "85016bc639027363932f9adf7012d7a7", "text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. This technique usually compromises the high system efficiency that is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter provides a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.", "title": "" }, { "docid": "1014a33211c9ca3448fa02cf734a5775", "text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous: a parameter controls the rate of sparsification from no sparsification to total sparsification. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and find that for datasets with large numbers of features, substantial sparsity is discoverable.", "title": "" }, { "docid": "98d23862436d8ff4d033cfd48692c84d", "text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or, HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell–LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (< 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. 
Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.", "title": "" }, { "docid": "5b021c0223ee25535508eb1d6f63ff55", "text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-μm standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 μm2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via an external I2C master device such as a universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. 
This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications", "title": "" }, { "docid": "ac6430e097fb5a7dc1f7864f283dcf47", "text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.", "title": "" }, { "docid": "a4f0b524f79db389c72abd27d36f8944", "text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.", "title": "" }, { "docid": "5a9113dc952bb51faf40d242e91db09c", "text": "This study highlights the changes in lycopene and β-carotene retention in tomato juice subjected to combined pressure-temperature (P-T) treatments ((high-pressure processing (HPP; 500-700 MPa, 30 °C), pressure-assisted thermal processing (PATP; 500-700 MPa, 100 °C), and thermal processing (TP; 0.1 MPa, 100 °C)) for up to 10 min. Processing treatments utilized raw (untreated) and hot break (∼93 °C, 60 s) tomato juice as controls. Changes in bioaccessibility of these carotenoids as a result of processing were also studied. Microscopy was applied to better understand processing-induced microscopic changes. TP did not alter the lycopene content of the tomato juice. HPP and PATP treatments resulted in up to 12% increases in lycopene extractability. all-trans-β-Carotene showed significant degradation (p < 0.05) as a function of pressure, temperature, and time. Its retention in processed samples varied between 60 and 95% of levels originally present in the control. Regardless of the processing conditions used, <0.5% lycopene appeared in the form of micelles (<0.5% bioaccessibility). 
Electron microscopy images showed more prominent lycopene crystals in HPP and PATP processed juice than in thermally processed juice. However, lycopene crystals did appear to be enveloped regardless of the processing conditions used. The processed juice (HPP, PATP, TP) showed significantly higher (p < 0.05) all-trans-β-carotene micellarization as compared to the raw unprocessed juice (control). Interestingly, hot break juice subjected to combined P-T treatments showed 15-30% more all-trans-β-carotene micellarization than the raw juice subjected to combined P-T treatments. This study demonstrates that combined pressure-heat treatments increase lycopene extractability. However, the in vitro bioaccessibility of carotenoids was not significantly different among the treatments (TP, PATP, HPP) investigated.", "title": "" }, { "docid": "47afea1e95f86bb44a1cf11e020828fc", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "45a45087a6829486d46eda0adcff978f", "text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.", "title": "" }, { "docid": "e5ce1ddd50a728fab41043324938a554", "text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. 
Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.", "title": "" }, { "docid": "f10294ed332670587cf9c100f2d75428", "text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.", "title": "" }, { "docid": "bf3450649fdf5d5bb4ee89fbaf7ec0ff", "text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.", "title": "" } ]
scidocsrr
850a195fc49bfcc68808dd54c19d3d97
Energy Saving Additive Neural Network
[ { "docid": "b059f6d2e9f10e20417f97c05d92c134", "text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.", "title": "" } ]
[ { "docid": "6bc2f0ea840e4b14e1340aa0c0bf4f07", "text": "A low-voltage low-power CMOS operational transconductance amplifier (OTA) with near rail-to-rail output swing is presented in this brief. The proposed circuit is based on the current-mirror OTA topology. In addition, several circuit techniques are adopted to enhance the voltage gain. Simulated from a 0.8-V supply voltage, the proposed OTA achieves a 62-dB dc gain and a gain–bandwidth product of 160 MHz while driving a 2-pF load. The OTA is designed in a 0.18 μm CMOS process. The power consumption is 0.25 mW including the common-mode feedback circuit.", "title": "" }, { "docid": "235edeee5ed3a16b88960400d13cb64f", "text": "Product service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services that are realized by an actor network. More recently, PSS that comprise System of Systems (SoS) have been of increasing interest, notably in the transportation (autonomous vehicle infrastructures, multi-modal transportation) and energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus, a much larger number of potential architectures. However, it seems that neither the existing PSS nor SoS literature provides solutions to how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS and SoS architecting literature. We illustrate the methodology by applying it to the case of an automobile PSS.", "title": "" }, { "docid": "cdd3dd7a367027ebfe4b3f59eca99267", "text": "3 Computation of the shearlet transform: 3.1 Finite discrete shearlets; 3.2 A discrete shearlet frame; 3.3 Inversion of the shearlet transform; 3.4 Smooth shearlets; 3.5 Implementation details (3.5.1 Indexing; 3.5.2 Computation of spectra); 3.6 Short documentation; 3.7 Download & Installation; 3.8 Performance; 3.9 Remarks", "title": "" }, { "docid": "a3da533f428b101c8f8cb0de04546e48", "text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by traditional Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. 
We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.", "title": "" }, { "docid": "d81a5fd44adc6825e18e3841e4e66291", "text": "We study compression techniques for parallel in-memory graph algorithms, and show that we can achieve reduced space usage while obtaining competitive or improved performance compared to running the algorithms on uncompressed graphs. We integrate the compression techniques into Ligra, a recent shared-memory graph processing system. This system, which we call Ligra+, is able to represent graphs using about half of the space for the uncompressed graphs on average. Furthermore, Ligra+ is slightly faster than Ligra on average on a 40-core machine with hyper-threading. Our experimental study shows that Ligra+ is able to process graphs using less memory, while performing as well as or faster than Ligra.", "title": "" }, { "docid": "184402cd0ef80ae3426fd36fbb2ec998", "text": "Hundreds of hours of videos are uploaded every minute on YouTube and other video sharing sites: some will be viewed by millions of people and other will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. The proposed approach outperforms current state-of-the-art methods on two publicly available datasets.", "title": "" }, { "docid": "2da84ca7d7db508a6f9a443f2dbae7c1", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "d864cc5603c97a8ff3c070dd385fe3a8", "text": "Nowadays, different protocols coexist in Internet that provides services to users. Unfortunately, control decisions and distributed management make it hard to control networks. These problems result in an inefficient and unpredictable network behaviour. 
Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify the management in networks with respect to traditional architectures. Each of these aspects are possible because of the separation of control plane (controller) and data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks that provides the communication between control and data planes. Moreover, the advantage of decoupling control and data planes enables a quick evolution of protocols and also its deployment without replacing data plane switches. In this survey, we review the SDN technology and the OpenFlow protocol and their related works. Specifically, we describe some technologies as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. We classify different solutions for each technology attending to the problem that is being fixed.", "title": "" }, { "docid": "8674128201d80772040446f1ab6a7cd1", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living moms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down", "title": "" }, { "docid": "3755f56410365a498c3a1ff4b61e77de", "text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. 
Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.", "title": "" }, { "docid": "cc4548925973baa6220ad81082a93c86", "text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach to conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: “It is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States’ pre-eminent position in international commerce.” (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: “With population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... 
Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.” (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon, which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by “missing other important aspects of productivity enhancement.” It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study, which results in growth of the system’s output. This conclusion has been known in economic development literature since Tinbergen’s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer’s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it possesses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system’s productivity. 
On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density, a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operate the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define just-in-time delivery, a major service produced by intermodal transportation. 
Furthermore, Blackburn (1991) argues that just-in-time d", "title": "" }, { "docid": "a926341e8b663de6c412b8e3a61ee171", "text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables", "title": "" }, { "docid": "5c935db4a010bc26d93dd436c5e2f978", "text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratumsp. nov., M. koombooloombasp. nov., and M. mammillodactylus(Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australienseHolthuis, 1950, and M. glypticumof M. handschiniRoux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe(Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachiumare recorded from the Australian continent. Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.", "title": "" }, { "docid": "e2d39e2714351b04054b871fa8a7a2fa", "text": "In this letter, we propose sparsity-based coherent and noncoherent dictionaries for action recognition. First, the input data are divided into different clusters and the number of clusters depends on the number of action categories. Within each cluster, we seek data items of each action category. If the number of data items exceeds threshold in any action category, these items are labeled as coherent. In a similar way, all coherent data items from different clusters form a coherent group of each action category, and data that are not part of the coherent group belong to noncoherent group of each action category. These coherent and noncoherent groups are learned using K-singular value decomposition dictionary learning. Since the coherent group has more similarity among data, only few atoms need to be learned. In the noncoherent group, there is a high variability among the data items. So, we propose an orthogonal-projection-based selection to get optimal dictionary in order to retain maximum variance in the data. 
Finally, the obtained dictionary atoms of both groups in each action category are combined and then updated using the limited Broyden–Fletcher–Goldfarb–Shanno optimization algorithm. The experiments are conducted on challenging datasets HMDB51 and UCF50 with action bank features and achieve comparable result using this state-of-the-art feature.", "title": "" }, { "docid": "56e47efe6efdb7819c6a2e87e8fbb56e", "text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable. Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.", "title": "" }, { "docid": "b06fc6126bf086cdef1d5ac289cf5ebe", "text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. 
Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.", "title": "" }, { "docid": "3c29c0a3e8ec6292f05c7907436b5e9a", "text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.", "title": "" }, { "docid": "eb3eccf745937773c399334673235f57", "text": "Continuous practices, i.e., continuous integration, delivery, and deployment, are the software development industry practices that enable organizations to frequently and reliably release new features and products. With the increasing interest in the literature on continuous practices, it is important to systematically review and synthesize the approaches, tools, challenges, and practices reported for adopting and implementing continuous practices. This paper aimed at systematically reviewing the state of the art of continuous practices to classify approaches and tools, identify challenges and practices in this regard, and identify the gaps for future research. We used the systematic literature review method for reviewing the peer-reviewed papers on continuous practices published between 2004 and June 1, 2016. We applied the thematic analysis method for analyzing the data extracted from reviewing 69 papers selected using predefined criteria. 
We have identified 30 approaches and associated tools, which facilitate the implementation of continuous practices in the following ways: 1) reducing build and test time in continuous integration (CI); 2) increasing visibility and awareness on build and test results in CI; 3) supporting (semi-) automated continuous testing; 4) detecting violations, flaws, and faults in CI; 5) addressing security and scalability issues in deployment pipeline; and 6) improving dependability and reliability of deployment process. We have also determined a list of critical factors, such as testing (effort and time), team awareness and transparency, good design principles, customer, highly skilled and motivated team, application domain, and appropriate infrastructure that should be carefully considered when introducing continuous practices in a given organization. The majority of the reviewed papers were validation (34.7%) and evaluation (36.2%) research types. This paper also reveals that continuous practices have been successfully applied to both greenfield and maintenance projects. Continuous practices have become an important area of software engineering research and practice. While the reported approaches, tools, and practices are addressing a wide range of challenges, there are several challenges and gaps, which require future research work for improving the capturing and reporting of contextual information in the studies reporting different aspects of continuous practices; gaining a deep understanding of how software-intensive systems should be (re-) architected to support continuous practices; and addressing the lack of knowledge and tools for engineering processes of designing and running secure deployment pipelines.", "title": "" }, { "docid": "a9dbb873487081afcc2a24dd7cb74bfe", "text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path, in such a way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the most number of free message words. 
We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.", "title": "" }, { "docid": "cb66a49205c9914be88a7631ecc6c52a", "text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.", "title": "" } ]
scidocsrr
51921151c2e3c4b4fa039456a32f955f
A task-driven approach to time scale detection in dynamic networks
[ { "docid": "b89a3bc8aa519ba1ccc818fe2a54b4ff", "text": "We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55&nbsp;(p 0.01), which has far-reaching implications for past and future research on social networks.", "title": "" }, { "docid": "e4890b63e9a51029484354535765801c", "text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.", "title": "" } ]
[ { "docid": "d02e87a00aaf29a86cf94ad0c539fd0d", "text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.", "title": "" }, { "docid": "8972e89b0b06bf25e72f8cb82b6d629a", "text": "Community detection is an important task for mining the structure and function of complex networks. Generally, there are several different kinds of nodes in a network which are cluster nodes densely connected within communities, as well as some special nodes like hubs bridging multiple communities and outliers marginally connected with a community. In addition, it has been shown that there is a hierarchical structure in complex networks with communities embedded within other communities. Therefore, a good algorithm is desirable to be able to not only detect hierarchical communities, but also identify hubs and outliers. In this paper, we propose a parameter-free hierarchical network clustering algorithm SHRINK by combining the advantages of density-based clustering and modularity optimization methods. Based on the structural connectivity information, the proposed algorithm can effectively reveal the embedded hierarchical community structure with multiresolution in large-scale weighted undirected networks, and identify hubs and outliers as well. Moreover, it overcomes the sensitive threshold problem of density-based clustering algorithms and the resolution limit possessed by other modularity-based methods. To illustrate our methodology, we conduct experiments with both real-world and synthetic datasets for community detection, and compare with many other baseline methods. Experimental results demonstrate that SHRINK achieves the best performance with consistent improvements.", "title": "" }, { "docid": "5c32b7bea7470a50a900a62e1a3dffc3", "text": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. 
Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using A/B test, we show that the online Click-Through-Rates (CTRs) are improved comparing to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.", "title": "" }, { "docid": "e8c6cdc70be62c6da150b48ba69c0541", "text": "Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.", "title": "" }, { "docid": "8c0a8816028e8c50ebccbd812ee3a4e5", "text": "Songs are representation of audio signal and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents survey of the various algorithm and method for separating singing voice from musical background. From the survey it is observed that most of researchers used Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.", "title": "" }, { "docid": "8f1d27581e7a83e378129e4287c64bd9", "text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.", "title": "" }, { "docid": "76d260180b588f881f1009a420a35b3b", "text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. 
Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.", "title": "" }, { "docid": "49b0cf976357d0c943ff003526ffff1f", "text": "Transcranial direct current stimulation (tDCS) is a promising tool for neurocognitive enhancement. Several studies have shown that just a single session of tDCS over the left dorsolateral pFC (lDLPFC) can improve the core cognitive function of working memory (WM) in healthy adults. Yet, recent studies combining multiple sessions of anodal tDCS over lDLPFC with verbal WM training did not observe additional benefits of tDCS in subsequent stimulation sessions nor transfer of benefits to novel WM tasks posttraining. Using an enhanced stimulation protocol as well as a design that included a baseline measure each day, the current study aimed to further investigate the effects of multiple sessions of tDCS on WM. Specifically, we investigated the effects of three subsequent days of stimulation with anodal (20 min, 1 mA) versus sham tDCS (1 min, 1 mA) over lDLPFC (with a right supraorbital reference) paired with a challenging verbal WM task. WM performance was measured with a verbal WM updating task (the letter n-back) in the stimulation sessions and several WM transfer tasks (different letter set n-back, spatial n-back, operation span) before and 2 days after stimulation. Anodal tDCS over lDLPFC enhanced WM performance in the first stimulation session, an effect that remained visible 24 hr later. However, no further gains of anodal tDCS were observed in the second and third stimulation sessions, nor did benefits transfer to other WM tasks at the group level. Yet, interestingly, post hoc individual difference analyses revealed that in the anodal stimulation group the extent of change in WM performance on the first day of stimulation predicted pre to post changes on both the verbal and the spatial transfer task. Notably, this relationship was not observed in the sham group. Performance of two individuals worsened during anodal stimulation and on the transfer tasks. Together, these findings suggest that repeated anodal tDCS over lDLPFC combined with a challenging WM task may be an effective method to enhance domain-independent WM functioning in some individuals, but not others, or can even impair WM. They thus call for a thorough investigation into individual differences in tDCS respondence as well as further research into the design of multisession tDCS protocols that may be optimal for boosting cognition across a wide range of individuals.", "title": "" }, { "docid": "300485eefc3020135cdaa31ad36f7462", "text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! 
The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.", "title": "" }, { "docid": "ad4c9b26e0273ada7236068fb8ac4729", "text": "Understanding user participation is fundamental in anticipating the popularity of online content. In this paper, we explore how the number of users' comments during a short observation period after publication can be used to predict the expected popularity of articles published by a countrywide online newspaper. We evaluate a simple linear prediction model on a real dataset of hundreds of thousands of articles and several millions of comments collected over a period of four years. Analyzing the accuracy of our proposed model for different values of its basic parameters we provide valuable insights on the potentials and limitations for predicting content popularity based on early user activity.", "title": "" }, { "docid": "f55e380c158ae01812f009fd81642d7f", "text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.", "title": "" }, { "docid": "6c149f1f6e9dc859bf823679df175afb", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. 
We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "6982c79b6fa2cda4f0323421f8e3b4be", "text": "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task &#x2013; predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.", "title": "" }, { "docid": "f7a1eaa86a81b104a9ae62dc87c495aa", "text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). 
Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "9dac75a40e421163c4e05cfd5d36361f", "text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.", "title": "" }, { "docid": "29ac2afc399bbf61927c4821d3a6e0a0", "text": "A well used approach for echo cancellation is the two-path method, where two adaptive filters in parallel are utilized. Typically, one filter is continuously updated, and when this filter is considered better adjusted to the echo-path than the other filter, the coefficients of the better adjusted filter is transferred to the other filter. When this transfer should occur is controlled by the transfer logic. This paper proposes transfer logic that is both more robust and more simple to tune, owing to fewer parameters, than the conventional approach. Extensive simulations show the advantages of the proposed method.", "title": "" }, { "docid": "510439267c11c53b31dcf0b1c40e331b", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. 
This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "283708fe3c950ac08bf932d68feb6d56", "text": "Diabetic wounds are unlike typical wounds in that they are slower to heal, making treatment with conventional topical medications an uphill process. Among several different alternative therapies, honey is an effective choice because it provides comparatively rapid wound healing. Although honey has been used as an alternative medicine for wound healing since ancient times, the application of honey to diabetic wounds has only recently been revived. Because honey has some unique natural features as a wound healer, it works even more effectively on diabetic wounds than on normal wounds. In addition, honey is known as an \"all in one\" remedy for diabetic wound healing because it can combat many microorganisms that are involved in the wound process and because it possesses antioxidant activity and controls inflammation. In this review, the potential role of honey's antibacterial activity on diabetic wound-related microorganisms and honey's clinical effectiveness in treating diabetic wounds based on the most recent studies is described. Additionally, ways in which honey can be used as a safer, faster, and effective healing agent for diabetic wounds in comparison with other synthetic medications in terms of microbial resistance and treatment costs are also described to support its traditional claims.", "title": "" }, { "docid": "df6e410fddeb22c7856f5362b7abc1de", "text": "With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect the login credentials from malware even when OS is compromised. TrustLogin does not modify any system software in either client or server and is transparent to users, applications, and servers. We conduct two study cases of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show TrustLogin introduces a low overhead with the tested applications.", "title": "" } ]
scidocsrr
519cad491c492024d286bfcba25e17a6
A Heuristics Approach for Fast Detecting Suspicious Money Laundering Cases in an Investment Bank
[ { "docid": "e67dc912381ebbae34d16aad0d3e7d92", "text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.", "title": "" }, { "docid": "0a0f4f5fc904c12cacb95e87f62005d0", "text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.", "title": "" } ]
[ { "docid": "5666b1a6289f4eac05531b8ff78755cb", "text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.", "title": "" }, { "docid": "bfa178f35027a55e8fd35d1c87789808", "text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional reg ularities that are salient in the data.", "title": "" }, { "docid": "56cf91a279fdcee59841cb9b8c866626", "text": "This paper describes a new maximum-power-point-tracking method for a photovoltaic system based on the Lagrange Interpolation Formula and proposes the particle swarm optimization method. The proposed control scheme eliminates the problems of conventional methods by using only a simple numerical calculation to initialize the particles around the global maximum power point. Hence, the suggested control scheme will utilize less iterations to reach the maximum power point. Simulation study is carried out using MATLAB/SIMULINK and compared with the Perturb and Observe method, the Incremental Conductance method, and the conventional Particle Swarm Optimization algorithm. The proposed algorithm is verified with the OPAL-RT real-time simulator. The simulation results confirm that the proposed algorithm can effectively enhance the stability and the fast tracking capability under abnormal insolation conditions.", "title": "" }, { "docid": "70d7c838e7b5c4318e8764edb5a70555", "text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. 
In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.", "title": "" }, { "docid": "9fab400cba6d9c91aba707c6952889f8", "text": "Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets . Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.", "title": "" }, { "docid": "db1d87d3e5ab39ef639d7c53a740340a", "text": "Plants are natural producers of chemical substances, providing potential treatment of human ailments since ancient times. Some herbal chemicals in medicinal plants of traditional and modern medicine carry the risk of herb induced liver injury (HILI) with a severe or potentially lethal clinical course, and the requirement of a liver transplant. Discontinuation of herbal use is mandatory in time when HILI is first suspected as diagnosis. Although, herbal hepatotoxicity is of utmost clinical and regulatory importance, lack of a stringent causality assessment remains a major issue for patients with suspected HILI, while this problem is best overcome by the use of the hepatotoxicity specific CIOMS (Council for International Organizations of Medical Sciences) scale and the evaluation of unintentional reexposure test results. Sixty five different commonly used herbs, herbal drugs, and herbal supplements and 111 different herbs or herbal mixtures of the traditional Chinese medicine (TCM) are reported causative for liver disease, with levels of causality proof that appear rarely conclusive. 
Encouraging steps in the field of herbal hepatotoxicity focus on introducing analytical methods that identify cases of intrinsic hepatotoxicity caused by pyrrolizidine alkaloids, and on omics technologies, including genomics, proteomics, metabolomics, and assessing circulating micro-RNA in the serum of some patients with intrinsic hepatotoxicity. It remains to be established whether these new technologies can identify idiosyncratic HILI cases. To enhance its globalization, herbal medicine should universally be marketed as herbal drugs under strict regulatory surveillance in analogy to regulatory approved chemical drugs, proving a positive risk/benefit profile by enforcing evidence based clinical trials and excellent herbal drug quality.", "title": "" }, { "docid": "57290d8e0a236205c4f0ce887ffed3ab", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "a6e2652aa074719ac2ca6e94d12fed03", "text": "■ Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.", "title": "" }, { "docid": "e82cd7c22668b0c9ed62b4afdf49d1f4", "text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. 
It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.", "title": "" }, { "docid": "10d9758469a1843d426f56a379c2fecb", "text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and stimulated results validates the feasible configuration of the proposed coupler.", "title": "" }, { "docid": "58858f0cd3561614f1742fe7b0380861", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "e5539337c36ec7a03bf327069156ea2c", "text": "An approach is proposed to estimate the location, velocity, and acceleration of a target vehicle to avoid a possible collision. Radial distance, velocity, and acceleration are extracted from the hybrid linear frequency modulation (LFM)/frequency-shift keying (FSK) echoed signals and then processed using the Kalman filter and the trilateration process. This approach proves to converge fast with good accuracy. Two other approaches, i.e., an extended Kalman filter (EKF) and a two-stage Kalman filter (TSKF), are used as benchmarks for comparison. Several scenarios of vehicle movement are also presented to demonstrate the effectiveness of this approach.", "title": "" }, { "docid": "1ad353e3d7765e1681c062c777087be7", "text": "The cyber world provides an anonymous environment for criminals to conduct malicious activities such as spamming, sending ransom e-mails, and spreading botnet malware. Often, these activities involve textual communication between a criminal and a victim, or between criminals themselves. The forensic analysis of online textual documents for addressing the anonymity problem called authorship analysis is the focus of most cybercrime investigations. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. 
This paper is the first work that presents a unified data mining solution to address authorship analysis problems based on the concept of frequent pattern-based writeprint. Extensive experiments on real-life data suggest that our proposed solution can precisely capture the writing styles of individuals. Furthermore, the writeprint is effective to identify the author of an anonymous text from ∗Corresponding author Email addresses: iqbal_f@ciise.concordia.ca (Farkhund Iqbal), h_binsal@ciise.concordia.ca (Hamad Binsalleeh), fung@ciise.concordia.ca (Benjamin C. M. Fung), debbabi@ciise.concordia.ca (Mourad Debbabi) Preprint submitted to Information Sciences March 10, 2011 a group of suspects and to infer sociolinguistic characteristics of the author.", "title": "" }, { "docid": "fb6494dcf01a927597ff784a3323e8c2", "text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.", "title": "" }, { "docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2", "text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).", "title": "" }, { "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. 
The result is shown.", "title": "" }, { "docid": "0fb45311d5e6a7348917eaa12ffeab46", "text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network, that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved a State-ofThe-Art in the former and competitive results in the latter.", "title": "" }, { "docid": "decbbd09bcf7a36a3886d52864e9a08c", "text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period. There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. 
Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.", "title": "" }, { "docid": "91eaef6e482601533656ca4786b7a023", "text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.", "title": "" }, { "docid": "bba4d637cf40e81ea89e61e875d3c425", "text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. 
The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.", "title": "" } ]
scidocsrr
fb3ec739ae67416aa9f0feacf4d301c9
Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "d8042183e064ffba69b54246b17b9ff4", "text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.", "title": "" }, { "docid": "69d3c943755734903b9266ca2bd2fad1", "text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.", "title": "" }, { "docid": "a2cf369a67507d38ac1a645e84525497", "text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.", "title": "" }, { "docid": "60ac1fa826816d39562104849fff8f46", "text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.", "title": "" }, { "docid": "46170fe683c78a767cb15c0ac3437e83", "text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. 
However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.", "title": "" }, { "docid": "3a58c1a2e4428c0b875e1202055e5b13", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "918bf13ef0289eb9b78309c83e963b26", "text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.", "title": "" }, { "docid": "640fd96e02d8aa69be488323f77b40ba", "text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. 
Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.", "title": "" }, { "docid": "aa3c0d7d023e1f9795df048ee44d92ec", "text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: orlenyslp@ut.ee Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executedwhenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instances activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.", "title": "" }, { "docid": "8e082f030aa5c5372fe327d4291f1864", "text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]", "title": "" }, { "docid": "f376948c1b8952b0b19efad3c5ca0471", "text": "This essay grew out of an examination of one-tailed significance testing. 
One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …", "title": "" }, { "docid": "7d68eaf1d9916b0504ac13f5ff9ef980", "text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. 
In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.", "title": "" }, { "docid": "01165a990d16000ac28b0796e462147a", "text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.", "title": "" }, { "docid": "71bafd4946377eaabff813bffd5617d7", "text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. 
We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "7ad00ade30fad561b4caca2fb1326ed8", "text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.", "title": "" }, { "docid": "afe1be9e13ca6e2af2c5177809e7c893", "text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].", "title": "" }, { "docid": "f284c6e32679d8413e366d2daf1d4613", "text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. 
In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.", "title": "" }, { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
scidocsrr
e4ba62e072c6b93ff2d661792496595b
Game theory based mitigation of Interest flooding in Named Data Network
[ { "docid": "e253fe7f481dc9fbd14a69e4c7d3bf23", "text": "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) - an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.", "title": "" } ]
[ { "docid": "2f201cd1fe90e0cd3182c672110ce96d", "text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.", "title": "" }, { "docid": "60d6869cadebea71ef549bb2a7d7e5c3", "text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. 
A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.", "title": "" }, { "docid": "564c71ca08e39063f5de01fa5c8e74a3", "text": "The Internet of Things (IoT) is a latest concept of machine-to-machine communication, that also gave birth to several information security problems. Many traditional software solutions fail to address these security issues such as trustworthiness of remote entities. Remote attestation is a technique given by  Trusted Computing Group (TCG) to monitor and verify this trustworthiness. In this regard, various remote validation methods have been proposed. However, static techniques cannot provide resistance to recent attacks e.g. the latest Heartbleed bug, and the recent high profile glibc attack on Linux operating system. In this research, we have designed and implemented a lightweight Linux kernel security module for IoT devices that is  scalable enough to monitor multiple applications in the kernel space. The newly built technique can measure and report multiple application’s static and dynamic behavior simultaneously. Verification of behavior of applications is performed via machine learning techniques. The result shows that deviating behavior can be detected successfully by the verifier.", "title": "" }, { "docid": "51344373373bf04846ee40b049b086b9", "text": "We present a new algorithm for real-time hand tracking on commodity depth-sensing devices. Our method does not require a user-specific calibration session, but rather learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. 
We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture. We also provide quantitative evaluation datasets at http://gfx.uvic.ca/datasets/handy", "title": "" }, { "docid": "d67c9703ee45ad306384bbc8fe11b50e", "text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.", "title": "" }, { "docid": "efc82cbdc904f03a93fd6797024bf3cf", "text": "We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoderdecoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-ofthe-art encoder-decoder systems on the tasks of image captioning and source code captioning.1", "title": "" }, { "docid": "5fcb9873afd16e6705ab77d7e59aa453", "text": "Charging PEVs (Plug-In Electric Vehicles) at public fast charging station can improve the public acceptance and increase their penetration level by solving problems related to vehicles' battery. However, the price for the impact of fast charging stations on the distribution grid has to be dealt with. The main purpose of this paper is to investigate the impacts of fast charging stations on a distribution grid using a stochastic fast charging model and to present the charging model with some of its results. The model is used to investigate the impacts on distribution transformer loading and system bus voltage profiles of the test distribution grid. Stochastic and deterministic modelling approaches are also compared. 
It is concluded that fast charging stations affect transformer loading and system bus voltage profiles. Hence, necessary measures such as using local energy storage and voltage conditioning devices, such as SVC (Static Var Compensator), have to be used at the charging station to handle the problems. It is also illustrated that stochastic modelling approach can produce a more sound and realistic results than deterministic approach.", "title": "" }, { "docid": "107436d5f38f3046ef28495a14cc5caf", "text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.", "title": "" }, { "docid": "b88a79221efb5afc717cb2f97761271d", "text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.", "title": "" }, { "docid": "3269b3574b19a976de305c99f9529fcd", "text": "The objective of this master thesis is to identify \" key-drivers \" embedded in customer satisfaction data. The data was collected by a large transportation sector corporation during five years and in four different countries. The questionnaire involved several different sections of questions and ranged from demographical information to satisfaction attributes with the vehicle, dealer and several problem areas. Various regression, correlation and cooperative game theory approaches were used to identify the key satisfiers and dissatisfiers. The theoretical and practical advantages of using the Shapley value, Canonical Correlation Analysis and Hierarchical Logistic Regression has been demonstrated and applied to market research. ii iii Acknowledgements", "title": "" }, { "docid": "18883fdb506d235fdf72b46e76923e41", "text": "The Ponseti method for the management of idiopathic clubfoot has recently experienced a rise in popularity, with several centers reporting excellent outcomes. The challenge in achieving a successful outcome with this method lies not in correcting deformity but in preventing relapse. 
The most common cause of relapse is failure to adhere to the prescribed postcorrective bracing regimen. Socioeconomic status, cultural factors, and physician-parent communication may influence parental compliance with bracing. New, more user-friendly braces have been introduced in the hope of improving the rate of compliance. Strategies that may be helpful in promoting adherence include educating the family at the outset about the importance of bracing, encouraging calls and visits to discuss problems, providing clear written instructions, avoiding or promptly addressing skin problems, and refraining from criticism of the family when noncompliance is evident. A strong physician-family partnership and consideration of underlying cognitive, socioeconomic, and cultural issues may lead to improved adherence to postcorrective bracing protocols and better patient outcomes.", "title": "" }, { "docid": "3021929187465029b9761aeb3eb20580", "text": "We show that a deep convolutional network with an architecture inspired by the models used in image recognition can yield accuracy similar to a long-short term memory (LSTM) network, which achieves the state-of-the-art performance on the standard Switchboard automatic speech recognition task. Moreover, we demonstrate that merging the knowledge in the CNN and LSTM models via model compression further improves the accuracy of the convolutional model.", "title": "" }, { "docid": "45c006e52bdb9cfa73fd4c0ebf692dfe", "text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.", "title": "" }, { "docid": "11c106ac9e7002d138af49f1bf303c88", "text": "The main purpose of Feature Subset Selection is to find a reduced subset of attributes from a data set described by a feature set. The task of a feature selection algorithm (FSA) is to provide with a computational solution motivated by a certain definition of relevance or by a reliable evaluation measure. In this paper several fundamental algorithms are studied to assess their performance in a controlled experimental scenario. A measure to evaluate FSAs is devised that computes the degree of matching between the output given by a FSA and the known optimal solutions. An extensive experimental study on synthetic problems is carried out to assess the behaviour of the algorithms in terms of solution accuracy and size as a function of the relevance, irrelevance, redundancy and size of the data samples. 
The controlled experimental conditions facilitate the derivation of better-supported and meaningful conclusions.", "title": "" }, { "docid": "f8093849e9157475149d00782c60ae60", "text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.", "title": "" }, { "docid": "79fd1db13ce875945c7e11247eb139c8", "text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.", "title": "" }, { "docid": "6902e1604957fa21adbe90674bf5488d", "text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. 
First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.", "title": "" }, { "docid": "f3467adcca693e015c9dcc85db04d492", "text": "For urban driving, knowledge of ego-vehicle’s position is a critical piece of information that enables advanced driver-assistance systems or self-driving cars to execute safety-related, autonomous driving maneuvers. This is because, without knowing the current location, it is very hard to autonomously execute any driving maneuvers for the future. The existing solutions for localization rely on a combination of a Global Navigation Satellite System, an inertial measurement unit, and a digital map. However, in urban driving environments, due to poor satellite geometry and disruption of radio signal reception, their longitudinal and lateral errors are too significant to be used for an autonomous system. To enhance the existing system’s localization capability, this work presents an effort to develop a vision-based lateral localization algorithm. The algorithm aims at reliably counting, with or without observations of lane-markings, the number of road-lanes and identifying the index of the road-lane on the roadway upon which our vehicle happens to be driving. Tests of the proposed algorithms against intercity and interstate highway videos showed promising results in terms of counting the number of road-lanes and the indices of the current road-lanes. C © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "5536f306c3633874299be57a19e35c01", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.04.023 ⇑ Corresponding author. Tel.: +55 8197885665. E-mail addresses: rflm@cin.ufpe.br (Rafael Ferreira), lscabral@gmail.com (L. de Souza Cabral), rdl@cin.ufpe.br (R.D. Lins), gfps.cin@gmail.com (G. Pereira e Silva), fred@cin.ufpe.br (F. Freitas), gdcc@cin.ufpe.br (G.D.C. Cavalcanti), rjlima01@gmail. com (R. Lima), steven.simske@hp.com (S.J. Simske), luciano.favaro@hp.com (L. Favaro). Rafael Ferreira a,⇑, Luciano de Souza Cabral , Rafael Dueire Lins , Gabriel Pereira e Silva , Fred Freitas , George D.C. Cavalcanti , Rinaldo Lima , Steven J. Simske , Luciano Favaro c", "title": "" } ]
scidocsrr
52c1d35a8fd58fe024f3b5b19174c2ce
Blockchain And Its Applications
[ { "docid": "469c17aa0db2c70394f081a9a7c09be5", "text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.", "title": "" }, { "docid": "4deea3312fe396f81919b07462551682", "text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent", "title": "" } ]
[ { "docid": "98d998eae1fa7a00b73dcff0251f0bbd", "text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.", "title": "" }, { "docid": "d6ca38ccad91c0c2c51ba3dd5be454b2", "text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.", "title": "" }, { "docid": "d65376ed544623a927a868b35394409e", "text": "The balance compensating techniques for asymmetric Marchand balun are presented in this letter. 
The amplitude and phase difference are characterized explicitly by S21 and S31, from which the factors responsible for the balance compensating are determined. Finally, two asymmetric Marchand baluns, which have normal and enhanced balance compensation, respectively, are designed and fabricated in a 0.18 μm CMOS technology for demonstration. The simulation and measurement results show that the proposed balance compensating techniques are valid in a very wide frequency range up to millimeter-wave (MMW) band.", "title": "" }, { "docid": "99c29c6cacb623a857817c412d6d9515", "text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.", "title": "" }, { "docid": "b8fa649e8b5a60a05aad257a0a364b51", "text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.", "title": "" }, { "docid": "117c66505964344d9c350a4e57a4a936", "text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). 
We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.", "title": "" }, { "docid": "28fa91e4476522f895a6874ebc967cfa", "text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.", "title": "" }, { "docid": "e502cdbbbf557c8365b0d4b69745e225", "text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.", "title": "" }, { "docid": "7e004a7b6a39ff29176dd19a07c15448", "text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combing lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to asses the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. 
Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.", "title": "" }, { "docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76", "text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "213313382d4e5d24a065d551012887ed", "text": "The authors present full wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turns loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.", "title": "" }, { "docid": "b02dcd4d78f87d8ac53414f0afd8604b", "text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.", "title": "" }, { "docid": "caab00ae6fcae59258ad4e45f787db64", "text": "Traditional bullying has received considerable research but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10 including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. 
Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying while higher ADHD symptoms were associated with victimization from this. Lower academic achievement and lower self esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not other outcomes. Cyber-bullying has differences in predictors from traditional bullying and intervention programmes need to take these into consideration.", "title": "" }, { "docid": "e5aed574fbe4560a794cf8b77fb84192", "text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.", "title": "" }, { "docid": "22bb6af742b845dea702453b6b14ef3a", "text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.", "title": "" }, { "docid": "cc8a4744f05d5f46feacaff27b91a86c", "text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. 
However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundreds of milliseconds and hence appears amenable to a real-time implementation.", "title": "" }, { "docid": "f44d3512cd8658f824b0ba0ea5a69e4a", "text": "Customer retention is a major issue for various service-based organizations particularly telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in customer retention process and inferring the future behavior of the customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the samples size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also reveals the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that the overall predictive performance of MTDF and rules-generation based on genetic algorithms performed the best as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.", "title": "" }, { "docid": "3e9de22ac9f81cf3233950a0d72ef15a", "text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purpose in the design of different types of fans. Therefore, multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed at three steps. At the first step, Head rise (HR) and the Head loss (HL) in a set of FC centrifugal fan is numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. 
It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of FC fans can be discovered by Pareto based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modeling and the Pareto optimization approach.", "title": "" }, { "docid": "bddf8420c2dd67dd5be10556088bf653", "text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduce storage overhead.", "title": "" }, { "docid": "40beda0d1e99f4cc5a15a3f7f6438ede", "text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.", "title": "" } ]
scidocsrr
31917eed92437862154233d7239c1af1
3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture
[ { "docid": "1dcae3f9b4680725d2c7f5aa1736967c", "text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.", "title": "" } ]
[ { "docid": "72b25e72706720f71ebd6fe8cf769df5", "text": "This paper reports our recent result in designing a function for autonomous APs to estimate throughput and delay of its clients in 2.4GHz WiFi channels to support those APs' dynamic channel selection. Our function takes as inputs the traffic volume and strength of signals emitted from nearby interference APs as well as the target AP's traffic volume. By this function, the target AP can estimate throughput and delay of its clients without actually moving to each channel, it is just required to monitor IEEE802.11 MAC frames sent or received by the interference APs. The function is composed of an SVM-based classifier to estimate capacity saturation and a regression function to estimate both throughput and delay in case of saturation in the target channel. The training dataset for the machine learning is created by a highly-precise network simulator. We have conducted over 10,000 simulations to train the model, and evaluated using additional 2,000 simulation results. The result shows that the estimated throughput error is less than 10%.", "title": "" }, { "docid": "b50c010e8606de8efb7a9e861ca31059", "text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.", "title": "" }, { "docid": "bf2746e237446a477919b3d6c2940237", "text": "In this paper, we first introduce the RF performance of Globalfoundries 45RFSOI process. NFET Ft > 290GHz and Fmax >380GHz. Then we present several mm-Wave circuit block designs, i.e., Switch, Power Amplifier, and LNA, based on 45RFSOI process for 5G Front End Module (FEM) applications. For the SPDT switch, insertion loss (IL) < 1dB at 30GHz with 32dBm P1dB and > 25dBm Pmax. For the PA, with a 2.9V power supply, the PA achieves 13.1dB power gain and a saturated output power (Psat) of 16.2dBm with maximum power-added efficiency (PAE) of 41.5% at 24Ghz continuous-wave (CW). With 960Mb/s 64QAM signal, 22.5% average PAE, −29.6dB EVM, and −30.5dBc ACLR are achieved with 9.5dBm average output power.", "title": "" }, { "docid": "c00a29466c82f972a662b0e41b724928", "text": "We introduce the type theory ¿µv, a call-by-value variant of Parigot's ¿µ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from ¿µv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. 
Proof-theoretically the dual λµv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that λµv and µPCFv respectively should be to functional computation with first-class access to the flow of control what λ-calculus and PCF respectively are to pure functional programming: λµv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.", "title": "" }, { "docid": "f52cde20377d4b8b7554f9973c220d0a", "text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also designed for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions of movie reviews with increasing throughput.", "title": "" }, { "docid": "281323234970e764eff59579220be9b4", "text": "Methods based on kernel density estimation have been successfully applied for various data mining tasks. Their natural interpretation together with suitable properties make them an attractive tool among others in clustering problems. In this paper, the Complete Gradient Clustering Algorithm has been used to investigate a real data set of grains. The wheat varieties, Kama, Rosa and Canadian, characterized by measurements of main grain geometric features obtained by X-ray technique, have been analyzed. The proposed algorithm is expected to be an effective tool for recognizing wheat varieties. A comparison between the clustering results obtained from this method and the classical k-means clustering algorithm shows positive practical features of the Complete Gradient Clustering Algorithm.", "title": "" }, { "docid": "e872a91433539301a857eab518cacb38", "text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present Arnold, a completely autonomous agent to play First-Person Shooter Games using only screen pixel data and demonstrate its effectiveness on Doom, a classical first-person shooter game. Arnold is trained with deep reinforcement learning using a recent Action-Navigation architecture, which uses separate deep neural networks for exploring the map and fighting enemies. Furthermore, it utilizes a lot of techniques such as augmenting high-level game features, reward shaping and sequential updates for efficient training and effective performance. Arnold outperforms average humans as well as in-built game bots on different variations of the deathmatch.
It also obtained the highest kill-to-death ratio in both the tracks of the Visual Doom AI Competition and placed second in terms of the number of frags.", "title": "" }, { "docid": "5374ed153eb37e5680f1500fea5b9dbe", "text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.", "title": "" }, { "docid": "1af028a0cf88d0ac5c52e84019554d51", "text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.", "title": "" }, { "docid": "c2fc81074ceed3d7c3690a4b23f7624e", "text": "The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their distributions--and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli--called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. 
The authors discuss how this representation and the diffusion model's decision process might be integrated with current models of lexical access.", "title": "" }, { "docid": "e3a2b7d38a777c0e7e06d2dc443774d5", "text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.", "title": "" }, { "docid": "fb1d1c291b175c1fc788832fec008664", "text": "In Vehicular Ad Hoc Networks (VANETs), anonymity of the nodes sending messages should be preserved, while at the same time the law enforcement agencies should be able to trace the messages to the senders when necessary. It is also necessary that the messages sent are authenticated and delivered to the vehicles in the relevant areas quickly. In this paper, we present an efficient protocol for fast dissemination of authenticated messages in VANETs. It ensures the anonymity of the senders and also provides mechanism for law enforcement agencies to trace the messages to their senders, when necessary.", "title": "" }, { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "1e6c2319e7c9e51cd4e31107d56bce91", "text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. 
are being integrated by marketers today to analyse eWOM and derive actionable intelligence.", "title": "" }, { "docid": "b9a214ad1b6a97eccf6c14d3d778b2ff", "text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.", "title": "" }, { "docid": "caf1a9d9b00e7d2c79a2869b17aa7292", "text": "Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting is not known a priori. We present a new real world data set that has been collected from 15 participants for 8 common activities were they carried 7 wearable devices in different positions. Further, we introduce a device localization method that uses random forest classifiers to predict the device position based on acceleration data. We perform the most complete experiment in on-body device location that includes all relevant device positions for the recognition of a variety of different activities. We show that the method outperforms other approaches achieving an F-Measure of 89% across different positions. We also show that the detection of the device position consistently improves the result of activity recognition for common activities.", "title": "" }, { "docid": "52e0f106480635b84339c21d1a24dcde", "text": "We propose a fast, parallel, maximum clique algorithm for large, sparse graphs that is designed to exploit characteristics of social and information networks. We observe roughly linear runtime scaling over graphs between 1000 vertices and 100M vertices. In a test with a 1.8 billion-edge social network, the algorithm finds the largest clique in about 20 minutes. For social networks, in particular, we found that using the core number of a vertex in combination with a good heuristic clique finder efficiently removes the vast majority of the search space. In addition, we parallelize the exploration of the search tree. In the algorithm, processes immediately communicate changes to upper and lower bounds on the size of maximum clique, which occasionally results in a super-linear speedup because vertices with especially large search spaces can be pruned by other processes. We use this clique finder to investigate the size of the largest temporal strong components in dynamic networks, which requires finding the largest clique in a particular temporal reachability graph.", "title": "" }, { "docid": "673cf83a9e08ed4e70b6cb706e0ffc5b", "text": "Conversation systems are of growing importance since they enable an easy interaction interface between humans and computers: using natural languages. 
To build a conversation system with adequate intelligence is challenging, and requires abundant resources including an acquisition of big data and interdisciplinary techniques, such as information retrieval and natural language processing. Along with the prosperity of Web 2.0, the massive data available greatly facilitate data-driven methods such as deep learning for human-computer conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will come up with at least some results from the immense repository for any user inputs. Given a human issued message, i.e., query, a traditional conversation system would provide a response after adequate training and learning of how to respond. In this paper, we propose a new task for conversation systems: joint learning of response ranking featured with next utterance suggestion. We assume that the new conversation mode is more proactive and keeps user engaging. We examine the assumption in experiments. Besides, to address the joint learning task, we propose a novel Dual-LSTM Chain Model to couple response ranking and next utterance suggestion simultaneously. From the experimental results, we demonstrate the usefulness of the proposed task and the effectiveness of the proposed model.", "title": "" }, { "docid": "cef4c47b512eb4be7dcadcee35f0b2ca", "text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.", "title": "" }, { "docid": "f74ccd06a302b70980d7b3ba2ee76cfb", "text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.", "title": "" } ]
scidocsrr
2b59c3f8ca29f7ebafd26cf004517e8c
Chainsaw: Chained Automated Workflow-based Exploit Generation
[ { "docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1", "text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.", "title": "" } ]
[ { "docid": "279c377e12cdb8aec7242e0e9da2dd26", "text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.", "title": "" }, { "docid": "da7f869037f40ab8666009d85d9540ff", "text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.", "title": "" }, { "docid": "9a0b6db90dc15e04f4b860e4355996f2", "text": "This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! 
Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.", "title": "" }, { "docid": "cac081006bb1a7daefe3c62b6c80fe10", "text": "A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing \"kernel trick\" with \"reservoir trick\" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimension \"reservoir\" state space, and the solution benefits from the advantages from structural risk minimization principle, and we call it support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with convex objective function, and their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real life nonlinear time series, and its generalization ability and robustness are obtained by regularization operator and robust loss function. The method is tested on the benchmark prediction problem of Mackey-Glass time series and applied to some real life time series such as monthly sunspots time series and runoff time series of the Yellow River, and the prediction results are promising", "title": "" }, { "docid": "1e18f23ad8ddc4333406c4703d51d92b", "text": "from its introductory beginning and across its 446 pages, centered around the notion that computer simulations and games are not at all disparate but very much aligning concepts. This not only makes for an interesting premise but also an engaging book overall which offers a resource into an educational subject (for it is educational simulations that the authors predominantly address) which is not overly saturated. The aim of the book as a result of this decision, which is explained early on, but also because of its subsequent structure, is to enlighten its intended audience in the way that effective and successful simulations/games operate (on a theoretical/conceptual and technical level, although in the case of the latter the book intentionally never delves into the realms of software programming specifics per se), can be designed, built and, finally, evaluated. The book is structured in three different and distinct parts, with four chapters in the first, six chapters in the second and six chapters in the third and final one. The first chapter is essentially a \" teaser \" , according to the authors. There are a couple of more traditional simulations described, a couple of well-known mainstream games (Mario Kart and Portal 2, interesting choices, especially the first one) and then the authors proceed to present applications which show the simulation and game convergence. These applications have a strong educational outlook (covering on this occasion very diverse topics, from flood prevention to drink driving awareness, amongst others). This chapter works very well in initiating the audience in the subject matter and drawing the necessary parallels. With all of the simula-tions/games/educational applications included BOOK REVIEW", "title": "" }, { "docid": "9593712906aa8272716a7fe5b482b91d", "text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. 
The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.", "title": "" }, { "docid": "511991822f427c3f62a4c091594e89e3", "text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multi-agent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although the outcome of this project cannot yet be considered sufficient for moving the simulation into real life, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.", "title": "" }, { "docid": "a6097c9898acd91feac6792251e77285", "text": "Pregabalin is a substance which modulates monoamine release in \"hyper-excited\" neurons. It binds potently to the α2-δ subunit of calcium channels. Pilot studies on alcohol- and benzodiazepine dependent patients reported a reduction of withdrawal symptoms through Pregabalin. To our knowledge, no studies have been conducted so far assessing this effect in opiate dependent patients. We report the case of a 43-year-old patient with Pregabalin intake during opiate withdrawal. Multiple inpatient and outpatient detoxifications from maintenance replacement therapy with Buprenorphine in order to reach complete abstinence did not show success because of extended withdrawal symptoms and repeated drug intake. Finally he disrupted his heroin intake with a simultaneous self-administration of 300 mg Pregabalin per day and was able to control the withdrawal symptoms. In this time we did control the Pregabalin level in serum and urine in our outpatient clinic. In the course the patient reported that he could treat further relapse with opiate or opioids with Pregabalin successfully. This case shows first details for Pregabalin to relieve withdrawal symptoms in opiate withdrawal.", "title": "" }, { "docid": "2eba092d19cc8fb35994e045f826e950", "text": "Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic.
Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their e‚ectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. Œis article represents the €rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the €eld.", "title": "" }, { "docid": "5a573ae9fad163c6dfe225f59b246b7f", "text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.", "title": "" }, { "docid": "b999fe9bd7147ef9c555131d106ea43e", "text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.", "title": "" }, { "docid": "82e5d8a3ee664f36afec3aa1b2e976f9", "text": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. 
To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.", "title": "" }, { "docid": "44017678b3da8c8f4271a9832280201e", "text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "d97669811124f3c6f4cef5b2a144a46c", "text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "058340d519ade55db4d6db879df95253", "text": "Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. 
When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.", "title": "" }, { "docid": "d145bad318d074f036cf1aa1a49066b8", "text": "Based on imbalanced data, the predictive models for 5year survivability of breast cancer using decision tree are proposed. After data preprocessing from SEER breast cancer datasets, it is obviously that the category of data distribution is imbalanced. Under-sampling is taken to make up the disadvantage of the performance of models caused by the imbalanced data. The performance of the models is evaluated by AUC under ROC curve, accuracy, specificity and sensitivity with 10-fold stratified cross-validation. The performance of models is best while the distribution of data is approximately equal. Bagging algorithm is used to build an integration decision tree model for predicting breast cancer survivability. Keywords-imbalanced data;decision tree;predictive breast cancer survivability;10-fold stratified cross-validation;bagging algorithm", "title": "" }, { "docid": "406e06e00799733c517aff88c9c85e0b", "text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.", "title": "" }, { "docid": "1c78424b85b5ffd29e04e34639548bc8", "text": "Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. 
We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets.", "title": "" }, { "docid": "002acd845aa9776840dfe9e8755d7732", "text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.", "title": "" } ]
scidocsrr
61b9619b02f8c7f3c0d2b06f4e6b6413
Linux kernel vulnerabilities: state-of-the-art defenses and open problems
[ { "docid": "3724a800d0c802203835ef9f68a87836", "text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and message-signaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.", "title": "" }, { "docid": "68bab5e0579a0cdbaf232850e0587e11", "text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not. To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected. We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.", "title": "" } ]
[ { "docid": "68f10e252faf7171cac8d5ba914fcba9", "text": "Most languages have no formal writing system and at best a limited written record. However, textual data is critical to natural language processing and particularly important for the training of language models that would facilitate speech recognition of such languages. Bilingual phonetic dictionaries are often available in some form, since lexicon creation is a fundamental task of documentary linguistics. We investigate the use of such dictionaries to improve language models when textual training data is limited to as few as 1k sentences. The method involves learning cross-lingual word embeddings as a pretraining step in the training of monolingual language models. Results across a number of languages show that language models are improved by such pre-training.", "title": "" }, { "docid": "45b17b6521e84c8536ad852969b21c1d", "text": "Previous research on online media popularity prediction concluded that the rise in popularity of online videos maintains a conventional logarithmic distribution. However, recent studies have shown that a significant portion of online videos exhibit bursty/sudden rise in popularity, which cannot be accounted for by video domain features alone. In this paper, we propose a novel transfer learning framework that utilizes knowledge from social streams (e.g., Twitter) to grasp sudden popularity bursts in online content. We develop a transfer learning algorithm that can learn topics from social streams allowing us to model the social prominence of video content and improve popularity predictions in the video domain. Our transfer learning framework has the ability to scale with incoming stream of tweets, harnessing physical world event information in real-time. Using data comprising of 10.2 million tweets and 3.5 million YouTube videos, we show that social prominence of the video topic (context) is responsible for the sudden rise in its popularity where social trends have a ripple effect as they spread from the Twitter domain to the video domain. We envision that our cross-domain popularity prediction model will be substantially useful for various media applications that could not be previously solved by traditional multimedia techniques alone.", "title": "" }, { "docid": "28b7905d804cef8e54dbdf4f63f6495d", "text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.", "title": "" }, { "docid": "a83b417c2be604427eacf33b1db91468", "text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. 
In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.", "title": "" }, { "docid": "71759cdcf18dabecf1d002727eb9d8b8", "text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.", "title": "" }, { "docid": "0cd5813a069c8955871784cd3e63aa83", "text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.", "title": "" }, { "docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9", "text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.", "title": "" }, { "docid": "03f98b18392bd178ea68ce19b13589fa", "text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. 
In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.", "title": "" }, { "docid": "4e46fb5c1abb3379519b04a84183b055", "text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. 
Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "f5df06ebd22d4eac95287b38a5c3cc6b", "text": "We discuss the use of a double exponentially tapered slot antenna (DETSA) fabricated on flexible liquid crystal polymer (LCP) as a candidate for ultrawideband (UWB) communications systems. The features of the antenna and the effect of the antenna on a transmitted pulse are investigated. Return loss and E and H plane radiation pattern measurements are presented in several frequencies covering the whole ultra wide band. The return loss remains below -10 dB and the shape of the radiation pattern remains fairly constant in the whole UWB range (3.1 to 10.6 GHz). The main lobe characteristic of the radiation pattern remains stable even when the antenna is significantly conformed. The major effect of the conformation is an increase in the cross polarization component amplitude. The system: transmitter DETSA-channel receiver DETSA is measured in frequency domain and shows that the antenna adds very little distortion on a transmitted pulse. The distortion remains small even when both transmitter and receiver antennas are folded, although it increases slightly.", "title": "" }, { "docid": "27bcbde431c340db7544b58faa597fb7", "text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.", "title": "" }, { "docid": "a583bbf2deac0bf99e2790c47598cddd", "text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. 
By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.", "title": "" }, { "docid": "6e63767a96f0d57ecfe98f55c89ae778", "text": "We investigate the use of Deep Q-Learning to control a simulated car via reinforcement learning. We start by implementing the approach of [5] ourselves, and then experimenting with various possible alterations to improve performance on our selected task. In particular, we experiment with various reward functions to induce specific driving behavior, double Q-learning, gradient update rules, and other hyperparameters. We find we are successfully able to train an agent to control the simulated car in JavaScript Racer [3] in some respects. Our agent successfully learned the turning operation, progressively gaining the ability to navigate larger sections of the simulated raceway without crashing. In obstacle avoidance, however, our agent faced challenges which we suspect are due to insufficient training time.", "title": "" }, { "docid": "c71d27d4e4e9c85e3f5016fa36d20a16", "text": "We present, GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platform. Our approach, inspired from a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. For the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approaches consistently perform promising results compared with competitive methods over time.", "title": "" }, { "docid": "fa99f24d38858b5951c7af587194f4e3", "text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.", "title": "" }, { "docid": "951d3f81129ecafa2d271d4398d9b3e6", "text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. 
A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.", "title": "" }, { "docid": "37b60f30aba47a0c2bb3d31c848ee4bc", "text": "This research analyzed the perception of Makassar’s teenagers toward Korean drama and music and their influences to them. Interviews and digital recorder were provided as instruments of the research to ten respondents who are members of Makassar Korean Lover Community. Then, in analyzing data the researchers used descriptive qualitative method that aimed to get deep information about Korean wave in Makassar. The Results of the study found that Makassar’s teenagers put enormous interest in Korean culture especially Korean drama and music. However, most respondents also realize that the presence of Korean culture has a great negative impact to them and their environments. Korean culture itself gives effect in several aspects such as the influence on behavior, Influence on the taste and Influence on the environment as well.", "title": "" }, { "docid": "8b548e2c1922e6e105ab40b60fd7433c", "text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).", "title": "" }, { "docid": "56e406924a967700fba3fe554b9a8484", "text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. 
In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.", "title": "" } ]
scidocsrr
b0bf55e123a1d0efe1fd44d5b3c1e4e9
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
[ { "docid": "70cc8c058105b905eebdf941ca2d3f2e", "text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.", "title": "" } ]
[ { "docid": "8f78f2efdd2fecaf32fbc7f5ffa79218", "text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.", "title": "" }, { "docid": "8905bd760b0c72fbfe4fbabd778ff408", "text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a high-fidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations.
Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. 
In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" }, { "docid": "e49f9ad79d3d4d31003c0cda7d7d49c5", "text": "Greater trochanter pain syndrome due to tendinopathy or bursitis is a common cause of hip pain. The previously reported magnetic resonance (MR) findings of trochanteric tendinopathy and bursitis are peritrochanteric fluid and abductor tendon abnormality. We have often noted peritrochanteric high T2 signal in patients without trochanteric symptoms. The purpose of this study was to determine whether the MR findings of peritrochanteric fluid or hip abductor tendon pathology correlate with trochanteric pain. We retrospectively reviewed 131 consecutive MR examinations of the pelvis (256 hips) for T2 peritrochanteric signal and abductor tendon abnormalities without knowledge of the clinical symptoms. Any T2 peritrochanteric abnormality was characterized by size as tiny, small, medium, or large; by morphology as feathery, crescentic, or round; and by location as bursal or intratendinous. The clinical symptoms of hip pain and trochanteric pain were compared to the MR findings on coronal, sagittal, and axial T2 sequences using chi-square or Fisher’s exact test with significance assigned as p < 0.05. 
Clinical symptoms of trochanteric pain syndrome were present in only 16 of the 256 hips. All 16 hips with trochanteric pain and 212 (88%) of 240 without trochanteric pain had peritrochanteric abnormalities (p = 0.15). Eighty-eight percent of hips with trochanteric symptoms had gluteus tendinopathy while 50% of those without symptoms had such findings (p = 0.004). Other than tendinopathy, there was no statistically significant difference between hips with or without trochanteric symptoms and the presence of peritrochanteric T2 abnormality, its size or shape, and the presence of gluteus medius or minimus partial thickness tears. Patients with trochanteric pain syndrome always have peritrochanteric T2 abnormalities and are significantly more likely to have abductor tendinopathy on magnetic resonance imaging (MRI). However, although the absence of peritrochanteric T2 MR abnormalities makes trochanteric pain syndrome unlikely, detection of these abnormalities on MRI is a poor predictor of trochanteric pain syndrome as these findings are present in a high percentage of patients without trochanteric pain.", "title": "" }, { "docid": "8aa305f217314d60ed6c9f66d20a7abf", "text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.", "title": "" }, { "docid": "9164dab8c4c55882f8caecc587c32eb1", "text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. 
We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "0bcff493580d763dbc1dd85421546201", "text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other con?dential information is very signi?cant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.", "title": "" }, { "docid": "a0d34b1c003b7e88c2871deaaba761ed", "text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1", "title": "" }, { "docid": "7e78dd27dd2d4da997ceef7e867b7cd2", "text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. 
In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.", "title": "" }, { "docid": "be29160b73b9ab727eb760a108a7254a", "text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.", "title": "" }, { "docid": "136ed8dc00926ceec6d67b9ab35e8444", "text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. 
The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.", "title": "" }, { "docid": "d7eb92756c8c3fb0ab49d7b101d96343", "text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "ef4272cd4b0d4df9aa968cc9ff528c1e", "text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. 
The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.", "title": "" }, { "docid": "d8befc5eb47ac995e245cf9177c16d3d", "text": "Our hypothesis is that the video game industry, in the attempt to simulate a realistic experience, has inadvertently collected very accurate data which can be used to solve problems in the real world. In this paper we describe a novel approach to soccer match prediction that makes use of only virtual data collected from a video game(FIFA 2015). Our results were comparable and in some places better than results achieved by predictors that used real data. We also use the data provided for each player and the players present in the squad, to analyze the team strategy. Based on our analysis, we were able to suggest better strategies for weak teams", "title": "" }, { "docid": "eba545eb04c950ecd9462558c9d3da85", "text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.", "title": "" }, { "docid": "a31692667282fe92f2eefc63cd562c9e", "text": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. 
The generality and interactivity of the system are demonstrated through a case scenario.", "title": "" } ]
scidocsrr
08f2b24f0b7bc1bc200f868e5fa932a7
Facial volume restoration of the aging face with poly-l-lactic acid.
[ { "docid": "41ac115647c421c44d7ef1600814dc3e", "text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.", "title": "" }, { "docid": "0802735955b52c1dae64cf34a97a33fb", "text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. 
Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.", "title": "" } ]
[ { "docid": "ecca793cace7cbf6cc142f2412847df4", "text": "The development of capacitive power transfer (CPT) as a competitive wireless/contactless power transfer solution over short distances is proving viable in both consumer and industrial electronic products/systems. The CPT is usually applied in low-power applications, due to small coupling capacitance. Recent research has increased the coupling capacitance from the pF to the nF scale, enabling extension of CPT to kilowatt power level applications. This paper addresses the need of efficient power electronics suitable for CPT at higher power levels, while remaining cost effective. Therefore, to reduce the cost and losses single-switch-single-diode topologies are investigated. Four single active switch CPT topologies based on the canonical Ćuk, SEPIC, Zeta, and Buck-boost converters are proposed and investigated. Performance tradeoffs within the context of a CPT system are presented and corroborated with experimental results. A prototype single active switch converter demonstrates 1-kW power transfer at a frequency of 200 kHz with >90% efficiency.", "title": "" }, { "docid": "0fc3976820ca76c630476647761f9c21", "text": "Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts to design and construct their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models: walkers and flowers and discuss next steps.", "title": "" }, { "docid": "4c9d20c4d264a950cb89bd41401ec99a", "text": "The primary goal of a recommender system is to generate high quality user-centred recommendations. However, the traditional evaluation methods and metrics were developed before researchers understood all the factors that increase user satisfaction. This study is an introduction to a novel user and item classification framework. It is proposed that this framework should be used during user-centred evaluation of recommender systems and the need for this framework is justified through experiments. User profiles are constructed and matched against other users’ profiles to formulate neighbourhoods and generate top-N recommendations. The recommendations are evaluated to measure the success of the process. In conjunction with the framework, a new diversity metric is presented and explained. The accuracy, coverage, and diversity of top-N recommendations is illustrated and discussed for groups of users. It is found that in contradiction to common assumptions, not all users suffer as expected from the data sparsity problem. In fact, the group of users that receive the most accurate recommendations do not belong to the least sparse area of the dataset.", "title": "" }, { "docid": "3da6c20ba154de6fbea24c3cbb9c8ebb", "text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. 
There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …", "title": "" }, { "docid": "8a91835866267ef83ba245c12ce1283d", "text": "Due to the increasing demand in the agricultural industry, the need to effectively grow a plant and increase its yield is very important. In order to do so, it is important to monitor the plant during its growth period, as well as, at the time of harvest. In this paper image processing is used as a tool to monitor the diseases on fruits during farming, right from plantation to harvesting. For this purpose artificial neural network concept is used. Three diseases of grapes and two of apple have been selected. The system uses two image databases, one for training of already stored disease images and the other for implementation of query images. Back propagation concept is used for weight adjustment of training database. The images are classified and mapped to their respective disease categories on basis of three feature vectors, namely, color, texture and morphology. From these feature vectors morphology gives 90% correct result and it is more than other two feature vectors. 
This paper demonstrates effective algorithms for spread of disease and mango counting. Practical implementation of neural networks has been done using MATLAB.", "title": "" }, { "docid": "c9e47bfe0f1721a937ba503ed9913dba", "text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.", "title": "" }, { "docid": "c32d61da51308397d889db143c3e6f9d", "text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. 
As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.", "title": "" }, { "docid": "910fdcf9e9af05b5d1cb70a9c88e4143", "text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.", "title": "" }, { "docid": "c56c392e1a7d58912eeeb1718379fa37", "text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.", "title": "" }, { "docid": "1040e96ab179d5705eeb2983bdef31d3", "text": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. 
For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.", "title": "" }, { "docid": "b0d959bdb58fbcc5e324a854e9e07b81", "text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.", "title": "" }, { "docid": "659e71fb9274c47f369c37de751a91b2", "text": "The Timed Up and Go (TUG) is a clinical test used widely to measure balance and mobility, e.g. in Parkinson's disease (PD). The test includes a sequence of functional activities, namely: sit-to-stand, 3-meters walk, 180° turn, walk back, another turn and sit on the chair. Meanwhile the stopwatch is used to score the test by measuring the time which the patients with PD need to perform the test. Here, the work presents an instrumented TUG using a wearable inertial sensor unit attached on the lower back of the person. The approach is used to automate the process of assessment compared with the manual evaluation by using visual observation and a stopwatch. The developed algorithm is based on the Dynamic Time Warping (DTW) for multi-dimensional time series and has been applied with the augmented feature for detection and duration assessment of turn state transitions, while a 1-dimensional DTW is used to detect the sit-to-stand and stand-to-sit phases. The feature set is a 3-dimensional vector which consists of the angular velocity, derived angle and features from Linear Discriminant Analysis (LDA). The algorithm was tested on 10 healthy individuals and 20 patients with PD (10 patients with early and late disease phases respectively). The test demonstrates that the developed technique can successfully extract the time information of the sit-to-stand, both turns and stand-to-sit transitions in the TUG test.", "title": "" }, { "docid": "3e83f454f66e8aba14733205c8e19753", "text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. 
Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.", "title": "" }, { "docid": "2c8e50194e4b2238b9af86806323e2c5", "text": "Previous research suggests a possible link between eveningness and general difficulties with self-regulation (e.g., evening types are more likely than other chronotypes to have irregular sleep schedules and social rhythms and use substances). Our study investigated the relationship between eveningness and self-regulation by using two standardized measures of self-regulation: the Self-Control Scale and the Procrastination Scale. We predicted that an eveningness preference would be associated with poorer self-control and greater procrastination than would an intermediate or morningness preference. Participants were 308 psychology students (mean age=19.92 yrs) at a small Canadian college. Students completed the self-regulation questionnaires and Morningness/Eveningness Questionnaire (MEQ) online. The mean MEQ score was 46.69 (SD=8.20), which is intermediate between morningness and eveningness. MEQ scores ranged from definite morningness to definite eveningness, but the dispersion of scores was skewed toward more eveningness. Pearson and partial correlations (controlling for age) were used to assess the relationship between MEQ score and the Self-Control Scale (global score and 5 subscale scores) and Procrastination Scale (global score). All correlations were significant. The magnitude of the effects was medium for all measures except one of the Self-Control subscales, which was small. A multiple regression analysis to predict MEQ score using the Self-Control Scale (global score), Procrastination Scale, and age as predictors indicated the Self-Control Scale was a significant predictor (accounting for 20% of the variance). A multiple regression analysis to predict MEQ scores using the five subscales of the Self-Control Scale and age as predictors showed the subscales for reliability and work ethic were significant predictors (accounting for 33% of the variance). Our study showed a relationship between eveningness and low self-control, but it did not address whether the relationship is a causal one.", "title": "" }, { "docid": "81b3562907a19a12f02b82f927d89dc7", "text": "Warehouse automation systems that use robots to save human labor are becoming increasingly common. In a previous study, a picking system using a multi-joint type robot was developed. However, articulated robots are not ideal in warehouse scenarios, since inter-shelf space can limit their freedom of motion. Although the use of linear motion-type robots has been suggested as a solution, their drawback is that an additional cable carrier is needed. The authors therefore propose a new configuration for a robot manipulator that uses wireless power transmission (WPT), which delivers power without physical contact except at the base of the robot arm. We describe here a WPT circuit design suitable for rotating and sliding-arm mechanisms. 
Overall energy efficiency was confirmed to be 92.0%.", "title": "" }, { "docid": "3609f4923b9aebc3d18f31ac6ae78bea", "text": "Cloud computing is playing an ever larger role in the IT infrastructure. The migration into the cloud means that we must rethink and adapt our security measures. Ultimately, both the cloud provider and the customer have to accept responsibilities to ensure security best practices are followed. Firewalls are one of the most critical security features. Most IaaS providers make firewalls available to their customers. In most cases, the customer assumes a best-case working scenario which is often not assured. In this paper, we studied the filtering behavior of firewalls provided by five different cloud providers. We found that three providers have firewalls available within their infrastructure. Based on our findings, we developed an open-ended firewall monitoring tool which can be used by cloud customers to understand the firewall's filtering behavior. This information can then be efficiently used for risk management and further security considerations. Measuring today's firewalls has shown that they perform well for the basics, although may not be fully featured considering fragmentation or stateful behavior.", "title": "" }, { "docid": "b3f5d9335cccf62797c86b76fa2c9e7e", "text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017", "title": "" }, { "docid": "60971d26877ef62b816526f13bd76c24", "text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. 
Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)", "title": "" }, { "docid": "da5562859bfed0057e0566679a4aca3d", "text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.", "title": "" }, { "docid": "72d74a0eaa768f46b17bf75f1a059d3f", "text": "Cloud gaming represents a highly interactive service whereby game logic is rendered in the cloud and streamed as a video to end devices. While benefits include the ability to stream high-quality graphics games to practically any end user device, drawbacks include high bandwidth requirements and very low latency. Consequently, a challenge faced by cloud gaming service providers is the design of algorithms for adapting video streaming parameters to meet the end user system and network resource constraints. In this paper, we conduct an analysis of the commercial NVIDIA GeForce NOW game streaming platform adaptation mechanisms in light of variable network conditions. We further conduct an empirical user study involving the GeForce NOW platform to assess player Quality of Experience when such adaptation mechanisms are employed. The results provide insight into limitations of the currently deployed mechanisms, as well as aim to provide input for the proposal of designing future video encoding adaptation strategies.", "title": "" } ]
scidocsrr
e0301c813aa0aeaac7d4039bc9b5e5ae
The roles of brand community and community engagement in building brand trust on social media
[ { "docid": "64e0a1345e5a181191c54f6f9524c96d", "text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.", "title": "" } ]
[ { "docid": "89652309022bc00c7fd76c4fe1c5d644", "text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.", "title": "" }, { "docid": "c1906bcb735d0c77057441f13ea282fc", "text": "It has long been known that storage of information in working memory suffers as a function of proactive interference. Here we review the results of experiments using approaches from cognitive neuroscience to reveal a pattern of brain activity that is a signature of proactive interference. Many of these results derive from a single paradigm that requires one to resolve interference from a previous experimental trial. The importance of activation in left inferior frontal cortex is shown repeatedly using this task and other tasks. We review a number of models that might account for the behavioral and imaging findings about proactive interference, raising questions about the adequacy of these models.", "title": "" }, { "docid": "c4ecf2d867a84a94ad34a1d4943071df", "text": "This paper introduces our submission to the 2nd Facial Landmark Localisation Competition. We present a deep architecture to directly detect facial landmarks without using face detection as an initialization. The architecture consists of two stages, a Basic Landmark Prediction Stage and a Whole Landmark Regression Stage. At the former stage, given an input image, the basic landmarks of all faces are detected by a sub-network of landmark heatmap and affinity field prediction. At the latter stage, the coarse canonical face and the pose can be generated by a Pose Splitting Layer based on the visible basic landmarks. According to its pose, each canonical state is distributed to the corresponding branch of the shape regression sub-networks for the whole landmark detection. Experimental results show that our method obtains promising results on the 300-W dataset, and achieves superior performances over the baselines of the semi-frontal and the profile categories in this competition.", "title": "" }, { "docid": "c6d2371a165acc46029eb4ad42df3270", "text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. 
To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. 
More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used, which 17 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/game-transfer-phenomena-videogame/58041?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Healthcare Administration, Clinical Practice, and Bioinformatics eJournal Collection, InfoSci-Select, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Journal Disciplines Medicine, Healthcare, and Life Science. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2", "title": "" }, { "docid": "2390d3d6c51c4a6857c517eb2c2cb3c0", "text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. 
This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.", "title": "" }, { "docid": "9676c561df01b794aba095dc66b684f8", "text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells suggesting the putative existence of different subpopulations with distinct functional properties. In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting either developing B cells and/or long-lived plasma cells in human bone marrow.", "title": "" }, { "docid": "9c28badf1e53e69452c1d7aad2a87fab", "text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. 
We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.", "title": "" }, { "docid": "12af7a639f885a173950304cf44b5a42", "text": "Objective:To compare fracture rates in four diet groups (meat eaters, fish eaters, vegetarians and vegans) in the Oxford cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford).Design:Prospective cohort study of self-reported fracture risk at follow-up.Setting:The United Kingdom.Subjects:A total of 7947 men and 26 749 women aged 20–89 years, including 19 249 meat eaters, 4901 fish eaters, 9420 vegetarians and 1126 vegans, recruited by postal methods and through general practice surgeries.Methods:Cox regression.Results:Over an average of 5.2 years of follow-up, 343 men and 1555 women reported one or more fractures. Compared with meat eaters, fracture incidence rate ratios in men and women combined adjusted for sex, age and non-dietary factors were 1.01 (95% CI 0.88–1.17) for fish eaters, 1.00 (0.89–1.13) for vegetarians and 1.30 (1.02–1.66) for vegans. After further adjustment for dietary energy and calcium intake the incidence rate ratio among vegans compared with meat eaters was 1.15 (0.89–1.49). Among subjects consuming at least 525 mg/day calcium the corresponding incidence rate ratios were 1.05 (0.90–1.21) for fish eaters, 1.02 (0.90–1.15) for vegetarians and 1.00 (0.69–1.44) for vegans.Conclusions:In this population, fracture risk was similar for meat eaters, fish eaters and vegetarians. The higher fracture risk in the vegans appeared to be a consequence of their considerably lower mean calcium intake. An adequate calcium intake is essential for bone health, irrespective of dietary preferences.Sponsorship:The EPIC-Oxford study is supported by The Medical Research Council and Cancer Research UK.", "title": "" }, { "docid": "b1e039673d60defd9b8699074235cf1b", "text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.", "title": "" }, { "docid": "8aacdb790ddec13f396a0591c0cd227a", "text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. 
The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.", "title": "" }, { "docid": "26feac05cc1827728cbcb6be3b4bf6d1", "text": "This paper presents a Linux kernel module, DigSig, which helps system administrators control Executable and Linkable Format (ELF) binary execution and library loading based on the presence of a valid digital signature. By preventing attackers from replacing libraries and sensitive, privileged system daemons with malicious code, DigSig increases the difficulty of hiding illicit activities such as access to compromised systems. DigSig provides system administrators with an efficient tool which mitigates the risk of running malicious code at run time. This tool adds extra functionality previously unavailable for the Linux operating system: kernel level RSA signature verification with caching and revocation of signatures.", "title": "" }, { "docid": "a134fe9ffdf7d99593ad9cdfd109b89d", "text": "A hybrid particle swarm optimization (PSO) for the job shop problem (JSP) is proposed in this paper. In previous research, PSO particles search solutions in a continuous solution space. Since the solution space of the JSP is discrete, we modified the particle position representation, particle movement, and particle velocity to better suit PSO for the JSP. We modified the particle position based on preference list-based representation, particle movement based on swap operator, and particle velocity based on the tabu list concept in our algorithm. Giffler and Thompson’s heuristic is used to decode a particle position into a schedule. Furthermore, we applied tabu search to improve the solution quality. The computational results show that the modified PSO performs better than the original design, and that the hybrid PSO is better than other traditional metaheuristics. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "76f033087b24fdb7494dd7271adbb346", "text": "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. 
We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Both approaches are still far from human-level performance.", "title": "" }, { "docid": "21d84bd9ea7896892a3e69a707b03a6a", "text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.", "title": "" }, { "docid": "3230fba68358a08ab9112887bdd73bb9", "text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.", "title": "" }, { "docid": "e00295dc86476d1d350d11068439fe87", "text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.", "title": "" }, { "docid": "4c261e2b54a12270f158299733942a5f", "text": "Applying Data Mining (DM) in education is an emerging interdisciplinary research field also known as Educational Data Mining (EDM). Ensemble techniques have been successfully applied in the context of supervised learning to increase the accuracy and stability of prediction. In this paper, we present a hybrid procedure based on ensemble classification and clustering that enables academicians to firstly predict students’ academic performance and then place each student in a well-defined cluster for further advising. 
Additionally, it endows instructors an anticipated estimation of their students’ capabilities during team forming and in-class participation. For ensemble classification, we use multiple classifiers (Decision Trees-J48, Naïve Bayes and Random Forest) to improve the quality of student data by eliminating noisy instances, and hence improving predictive accuracy. We then use the approach of bootstrap (sampling with replacement) averaging, which consists of running k-means clustering algorithm to convergence of the training data and averaging similar cluster centroids to obtain a single model. We empirically compare our technique with other ensemble techniques on real world education datasets.", "title": "" }, { "docid": "2a7b7d9fab496be18f6bf50add2f7b1e", "text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.", "title": "" }, { "docid": "d18c53be23600c9b0ae2efa215c7c4af", "text": "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.", "title": "" }, { "docid": "c32c1c16aec9bc6dcfb5fa8fb4f25140", "text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. 
Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outperforms the state-of-the-art methods with a noticeable margin.", "title": "" } ]
scidocsrr
b0772812a9182f6354e8b447ff0558a0
Maximum Power Point Tracking for PV system under partial shading condition via particle swarm optimization
[ { "docid": "470093535d4128efa9839905ab2904a5", "text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.", "title": "" } ]
[ { "docid": "e4132ac9af863c2c17489817898dbd1c", "text": "This paper presents automatic parallel parking for car-like vehicle, with highlights on a path planning algorithm for arbitrary initial angle using two tangential arcs of different radii. The algorithm is divided into three parts. Firstly, a simple kinematic model of the vehicle is established based on Ackerman steering geometry; secondly, not only a minimal size of the parking space is analyzed based on the size and the performance of the vehicle but also an appropriate target point is chosen based on the size of the parking space and the vehicle; Finally, a path is generated based on two tangential arcs of different radii. The simulation results show that the feasibility of the proposed algorithm.", "title": "" }, { "docid": "26095dbc82b68c32881ad9316256bc42", "text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.", "title": "" }, { "docid": "49ff105e4bd35d88e2cbf988e22a7a3a", "text": "Personality testing is a popular method that used to be commonly employed in selection decisions in organizational settings. However, it is also a controversial practice according to a number researcher who claims that especially explicit measures of personality may be prone to the negative effects of faking and response distortion. The first aim of the present paper is to summarize Morgeson, Morgeson, Campion, Dipboye, Hollenbeck, Murphy and Schmitt’s paper that discussed the limitations of personality testing for performance ratings in relation to its basic conclusions about faking and response distortion. Secondly, the results of Rosse, Stecher, Miller and Levin’s study that investigated the effects of faking in personality testing on selection decisions will be discussed in detail. 
Finally, recent research findings related to implicit personality measures will be introduced along with the examples of the results related to the implications of those measures for response distortion in personality research and the suggestions for future research.", "title": "" }, { "docid": "1d1f14cb78693e56d014c89eacfcc3ef", "text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.", "title": "" }, { "docid": "9a7016a02eda7fcae628197b0625832b", "text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.", "title": "" }, { "docid": "c4fe9fd7e506e18f1a38bc71b7434b99", "text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.", "title": "" }, { "docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0", "text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. 
We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.", "title": "" }, { "docid": "65a990303d1d6efd3aea5307e7db9248", "text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org", "title": "" }, { "docid": "6e8cf6a53e1a9d571d5e5d1644c56e57", "text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.", "title": "" }, { "docid": "9814af3a2c855717806ad7496d21f40e", "text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. 
Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.", "title": "" }, { "docid": "1f93c117c048be827d0261f419c9cce3", "text": "Due to increasing number of internet users, popularity of Broadband Internet also increasing. Hence the connection cost should be decrease due to Wi Fi connectivity and built-in sensors in devices as well the maximum number of devices should be connected through a common medium. To meet all these requirements, the technology so called Internet of Things is evolved. Internet of Things (IoT) can be considered as a connection of computing devices like smart phones, coffee maker, washing machines, wearable device with an internet. IoT create network and connect \"things\" and people together by creating relationship between either people-people, people-things or things-things. As the number of device connection is increased, it increases the Security risk. Security is the biggest issue for IoT at any companies across the globe. Furthermore, privacy and data sharing can again be considered as a security concern for IoT. Companies, those who use IoT technique, need to find a way to store, track, analyze and make sense of the large amounts of data that will be generated. Few security techniques of IoT are necessary to implement to protect your confidential and important data as well for device protection through some internet security threats.", "title": "" }, { "docid": "e62e09ce3f4f135b12df4d643df02de6", "text": "Septic arthritis/tenosynovitis in the horse can have life-threatening consequences. The purpose of this cross-sectional retrospective study was to describe ultrasound characteristics of septic arthritis/tenosynovitis in a group of horses. Diagnosis of septic arthritis/tenosynovitis was based on historical and clinical findings as well as the results of the synovial fluid analysis and/or positive synovial culture. Ultrasonographic findings recorded were degree of joint/sheath effusion, degree of synovial membrane thickening, echogenicity of the synovial fluid, and presence of hyperechogenic spots and fibrinous loculations. Ultrasonographic findings were tested for dependence on the cause of sepsis, time between admission and beginning of clinical signs, and the white blood cell counts in the synovial fluid. Thirty-eight horses with confirmed septic arthritis/tenosynovitis of 43 joints/sheaths were included. Degree of effusion was marked in 81.4% of cases, mild in 16.3%, and absent in 2.3%. Synovial thickening was mild in 30.9% of cases and moderate/severe in 69.1%. Synovial fluid was anechogenic in 45.2% of cases and echogenic in 54.8%. Hyperechogenic spots were identified in 32.5% of structures and fibrinous loculations in 64.3%. 
Relationships between the degree of synovial effusion, degree of the synovial thickening, presence of fibrinous loculations, and the time between admission and beginning of clinical signs were identified, as well as between the presence of fibrinous loculations and the cause of sepsis (P ≤ 0.05). Findings indicated that ultrasonographic findings of septic arthritis/tenosynovitis may vary in horses, and may be influenced by time between admission and beginning of clinical signs.", "title": "" }, { "docid": "41d97d98a524e5f1e45ae724017819d9", "text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.", "title": "" }, { "docid": "9d75520f138bcf7c529488f29d01efbb", "text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.", "title": "" }, { "docid": "d5907911dfa7340b786f85618702ac12", "text": "In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.", "title": "" }, { "docid": "baad4c23994bafbdfba2a3d566c83558", "text": "Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. 
But not all applications need high-precision storage for all of their data structures all of the time. This article proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multilevel cells by reducing the number of programming pulses used to write them. The second mechanism mitigates wear-out failures and extends memory endurance by mapping approximate data onto blocks that have exhausted their hardware error correction resources. Simulations show that reduced-precision writes in multilevel phase-change memory cells can be 1.7 × faster on average and using failed blocks can improve array lifetime by 23% on average with quality loss under 10%.", "title": "" }, { "docid": "a31652c0236fb5da569ffbf326eb29e5", "text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesisissupportedbyotherevidencethattheperceivedrisks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. 
They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "468cdc4decf3871314ce04d6e49f6fad", "text": "Documents come naturally with structure: a section contains paragraphs which itself contains sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.", "title": "" }, { "docid": "578130d8ef9d18041c84ed226af8c84a", "text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.", "title": "" } ]
scidocsrr
8a69f2cdc23badb693bf45b084f5a6b8
Forecasting time series with complex seasonal patterns using exponential smoothing
[ { "docid": "ca29fee64e9271e8fce675e970932af1", "text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.", "title": "" } ]
[ { "docid": "b1d2def5ce60ff9e787eb32a3b0431a6", "text": "OSHA Region VIII office and the HBA of Metropolitan Denver who made this research possible and the Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health (NIOSH) for their support and funding via the awards 1 R03 OH04199-0: Occupational Low Back Pain in Residential Carpentry: Ergonomic Elements of Posture and Strain within the HomeSafe Pilot Program sponsored by OSHA and the HBA. Correspondence and requests for offprints should be sent to David P. Gilkey, Department of Environmental and Radiological Health Sciences, Colorado State University, Ft. Collins, CO 80523-1681, USA. E-mail: <dgilkey@colostate.edu>. Low Back Pain Among Residential Carpenters: Ergonomic Evaluation Using OWAS and 2D Compression Estimation", "title": "" }, { "docid": "cfd3548d7cf15b411b49eb77543d7903", "text": "INTRODUCTION\nLiquid injectable silicone (LIS) has been used for soft tissue augmentation in excess of 50 years. Until recently, all literature on penile augmentation with LIS consisted of case reports or small cases series, most involving surgical intervention to correct the complications of LIS. New formulations of LIS and new methodologies for injection have renewed interest in this procedure.\n\n\nAIM\nWe reported a case of penile augmentation with LIS and reviewed the pertinent literature.\n\n\nMETHODS\nComprehensive literature review was performed using PubMed. We performed additional searches based on references from relevant review articles.\n\n\nRESULTS\nInjection of medical grade silicone for soft tissue augmentation has a role in carefully controlled study settings. Historically, the use of LIS for penile augmentation has had poor outcomes and required surgical intervention to correct complications resulting from LIS.\n\n\nCONCLUSIONS\nWe currently discourage the use of LIS for penile augmentation until carefully designed and evaluated trials have been completed.", "title": "" }, { "docid": "e33129014269c9cf1579c5912f091916", "text": "Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions. A range of specific concerns like architecture, programming and quality will be looked at. We apply a 2-pronged classification and comparison framework. We will identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. We will discuss complex cloud architecture concerns such as commoditisation and federation of integrated, vertical cloud stacks.", "title": "" }, { "docid": "4f42f1a6a9804f292b81313d9e8e04bf", "text": "An integrated high performance, highly reliable, scalable, and secure communications network is critical for the successful deployment and operation of next-generation electricity generation, transmission, and distribution systems — known as “smart grids.” Much of the work done to date to define a smart grid communications architecture has focused on high-level service requirements with little attention to implementation challenges. 
This paper investigates in detail a smart grid communication network architecture that supports today's grid applications (such as supervisory control and data acquisition [SCADA], mobile workforce communication, and other voice and data communication) and new applications necessitated by the introduction of smart metering and home area networking, support of demand response applications, and incorporation of renewable energy sources in the grid. We present design principles for satisfying the diverse quality of service (QoS) and reliability requirements of smart grids.", "title": "" }, { "docid": "c724224060408a1e13b135cb7c2bb9e4", "text": "Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.", "title": "" }, { "docid": "f296b374b635de4f4c6fc9c6f415bf3e", "text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.", "title": "" }, { "docid": "8c51c464d9137eec4600a5df5c6b451a", "text": "An increasing number of disasters (natural and man-made) with a large number of victims and significant social and economical losses are observed in the past few years. Although particular events can always be attributed to fate, it is improving the disaster management that have to contribute to decreasing damages and ensuring proper care for citizens in affected areas. Some of the lessons learned in the last several years give clear indications that the availability, management and presentation of geo-information play a critical role in disaster management. However, all the management techniques that are being developed are understood by, and confined to the intellectual community and hence lack mass participation. Awareness of the disasters is the only effective way in which one can bring about mass participation. Hence, any disaster management is successful only when the general public has some awareness about the disaster. 
In the design of such awareness program, intelligent mapping through analysis and data sharing also plays a very vital role. The analytical capabilities of GIS support all aspects of disaster management: planning, response and recovery, and records management. The proposed GIS based awareness program in this paper would improve the currently practiced disaster management programs and if implemented, would result in a proper dosage of awareness and caution to the general public, which in turn would help to cope with the dangerous activities of disasters in future.", "title": "" }, { "docid": "c2e0b234898df278ee57ae5827faadeb", "text": "In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.", "title": "" }, { "docid": "d612aeb7f7572345bab8609571f4030d", "text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.", "title": "" }, { "docid": "f8d256bf6fea179847bfb4cc8acd986d", "text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.", "title": "" }, { "docid": "cccecb08c92f8bcec4a359373a20afcb", "text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. 
It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.", "title": "" }, { "docid": "65b2d6ea5e1089c52378b4fd6386224c", "text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.", "title": "" }, { "docid": "9abd7aedf336f32abed7640dd3f4d619", "text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. 
Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.", "title": "" }, { "docid": "dc75c32aceb78acd8267e7af442b992c", "text": "While pulmonary embolism (PE) causes approximately 100 000-180 000 deaths per year in the United States, mortality is restricted to patients who have massive or submassive PEs. This state of the art review familiarizes the reader with these categories of PE. The review discusses the following topics: pathophysiology, clinical presentation, rationale for stratification, imaging, massive PE management and outcomes, submassive PE management and outcomes, and future directions. It summarizes the most up-to-date literature on imaging, systemic thrombolysis, surgical embolectomy, and catheter-directed therapy for submassive and massive PE and gives representative examples that reflect modern practice. © RSNA, 2017.", "title": "" }, { "docid": "25d913188ee5790d5b3a9f5fb8b68dda", "text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.", "title": "" }, { "docid": "5124bfe94345f2abe6f91fe717731945", "text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.", "title": "" }, { "docid": "342b57da0f0fcf190f926dfe0744977d", "text": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. 
Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo.", "title": "" }, { "docid": "58fffa67053a82875177f32e126c2e43", "text": "Cracking-resistant password vaults have been recently proposed with the goal of thwarting offline attacks. This requires the generation of synthetic password vaults that are statistically indistinguishable from real ones. In this work, we establish a conceptual link between this problem and steganography, where the stego objects must be undetectable among cover objects. We compare the two frameworks and highlight parallels and differences. Moreover, we transfer results obtained in the steganography literature into the context of decoy generation. Our results include the infeasibility of perfectly secure decoy vaults and the conjecture that secure decoy vaults are at least as hard to construct as secure steganography.", "title": "" }, { "docid": "49a54c57984c3feaef32b708ae328109", "text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.", "title": "" } ]
scidocsrr
1149bf34849583bfda1a14a163505f1f
Towards Generalization and Simplicity in Continuous Control
[ { "docid": "05b6f7fd65ae6eee7fb3ae44e98fb2f9", "text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo", "title": "" } ]
[ { "docid": "c8be0e643c72c7abea1ad758ac2b49a8", "text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.", "title": "" }, { "docid": "9a973833c640e8a9fe77cd7afdae60f2", "text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.", "title": "" }, { "docid": "7a2d4032d79659a70ed2f8a6b75c4e71", "text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. 
We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.", "title": "" }, { "docid": "c6dfe01e87a7ec648f0857bf1a74a3ba", "text": "Received: 12 June 2006 Revised: 10 May 2007 Accepted: 22 July 2007 Abstract Although there is widespread agreement that leadership has important effects on information technology (IT) acceptance and use, relatively little empirical research to date has explored this phenomenon in detail. This paper integrates the unified theory of acceptance and use of technology (UTAUT) with charismatic leadership theory, and examines the role of project champions influencing user adoption. PLS analysis of survey data collected from 209 employees in seven organizations that had engaged in a large-scale IT implementation revealed that project champion charisma was positively associated with increased performance expectancy, effort expectancy, social influence and facilitating condition perceptions of users. Theoretical and managerial implications are discussed, and suggestions for future research in this area are provided. European Journal of Information Systems (2007) 16, 494–510. doi:10.1057/palgrave.ejis.3000682", "title": "" }, { "docid": "f88235f1056d66c5dc188fcf747bf570", "text": "In this paper, we compare the differences between traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze its profit-and-loss vectors of Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly 90% chance that the difference gap between the bet ratio recommended by Kelly criterion and and Vince's optimal f lies within 2%. Therefore, in the actual transaction, the values from Kelly Criterion could be taken directly as the optimal bet ratio for funds control.", "title": "" }, { "docid": "329a84a4757e7ee595c31d53a4ab84d0", "text": "Generating a reasonable ending for a given story context, i.e., story ending generation, is a strong indication of story comprehension. This task requires not only to understand the context clues which play an important role in planning the plot, but also to handle implicit knowledge to make a reasonable, coherent story. In this paper, we devise a novel model for story ending generation. The model adopts an incremental encoding scheme to represent context clues which are spanning in the story context. In addition, commonsense knowledge is applied through multi-source attention to facilitate story comprehension, and thus to help generate coherent and reasonable endings. Through building context clues and using implicit knowledge, the model is able to produce reasonable story endings. Automatic and manual evaluation shows that our model can generate more reasonable story endings than state-of-the-art baselines. 1", "title": "" }, { "docid": "438094ef7913de0236b57a85e7d511c2", "text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). 
Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.", "title": "" }, { "docid": "1d6a5ba2f937caa1df5f6d32ffd3bcb4", "text": "The objective of this study is to present an offline control of highly non-linear inverted pendulum system moving on a plane inclined at an angle of 10° from horizontal. The stabilisation was achieved using three different soft-computing control techniques i.e. Proportional-integral-derivative (PID), Fuzzy logic and Adaptive neuro fuzzy inference system (ANFIS). A Matlab-Simulink model of the proposed system was initially developed which was further simulated using PID controllers based on trial and error method. The ANFIS controller were trained using data sets generated from simulation results of PID controller. The ANFIS controllers were designed using only three membership functions. A fuzzy logic control of the proposed system is also shown using nine membership functions. The study compares the three techniques in terms of settling time, maximum overshoot and steady state error. The simulation results are shown with the help of graphs and tables which validates the effectiveness of proposed techniques.", "title": "" }, { "docid": "cd2ad7c7243c2b690239f1466b57c0ea", "text": "In 2001, JPL commissioned four industry teams to make a fresh examination of Mars Sample Return (MSR) mission architectures. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that the converged-upon MSR concept did not fit reasonably within a balanced program. Therefore, along with a new MSR Science Steering Group, JPL asked the industry teams plus JPL's Team-X to explore ways to reduce the cost. A paper presented at last year's conference described the emergence of a new, affordable \"Groundbreaking-MSR\" concept (Mattingly et al., 2003). This work addresses the continued evolution of the Groundbreaking MSR concept over the last year. One of the tenets of the low-cost approach is to use substantial heritage from an earlier mission, Mars Science Laboratory (MSL). Recently, the MSL project developed and switched its baseline to a revolutionary landing approach, coined \"skycrane\" where the MSL, which is a rover, would be lowered gently to the Martian surface from a hovering vehicle. MSR has adopted this approach in its mission studies, again continuing to capitalize on the heritage for a significant portion of the new lander. In parallel, a MSR Technology Board was formed to reexamine MSR technology needs and participate in a continuing refinement of architectural trades. While the focused technology program continues to be definitized through the remainder of this year, the current assessment of what technology development is required, is discussed in this paper. In addition, the results of new trade studies and considerations will be discussed. 
Adopting these changes, the Groundbreaking MSR concept has shifted to that presented in this paper. It remains a project that is affordable and meets the basic science needs defined by the MSR Science Steering Group in 2002.", "title": "" }, { "docid": "020e01f6914b518d77887b1fef1a7be2", "text": "Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. [26] have introduced convolutional \"context encoders'' (CEs) for unsupervised feature learning through image completion tasks. With the additional help of adversarial training, CEs turned out to be a promising tool to complete complex structures in real inpainting problems. In the present paper we propose to push further this key ability by relying on perceptual reconstruction losses at training time. We show on a wide variety of visual scenes the merit of the approach forstructural inpainting, and confirm it through a user study. Combined with the optimization-based refinement of [32] with neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.", "title": "" }, { "docid": "ce1d25b3d2e32f903ce29470514abcce", "text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.", "title": "" }, { "docid": "00b80ec74135b3190a50b4e0d83af17a", "text": "Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that they offer to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date, however, there is no structured process (at least that is published in the public domain) that guides organizations in adopting agile practices. To address this situation, we present the agile adoption framework and the innovative approach we have used to implement it. The framework consists of two components: an agile measurement index, and a four-stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the Sidky Agile Measurement Index (SAMI) encompasses five agile levels that are used to identify the agile potential of projects and organizations. The four-stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced. 
To help substantiate the “goodness” of the Agile Adoption Framework, we presented it to various members of the agile community, and elicited responses through questionnaires. The results of that substantiation effort are encouraging, and are also presented in this paper.", "title": "" }, { "docid": "d11d8408649280e26172886fc8341954", "text": "OBJECTIVE\nSelf-stigma is highly prevalent in schizophrenia and can be seen as an important factor leading to low self-esteem. It is however unclear how psychological factors and actual adverse events contribute to self-stigma. This study empirically examines how symptom severity and the experience of being victimized affect both self-stigma and self-esteem.\n\n\nMETHODS\nPersons with a schizophrenia spectrum disorder (N = 102) were assessed with a battery of self-rating questionnaires and interviews. Structural equation modelling (SEM) was subsequently applied to test the fit of three models: a model with symptoms and victimization as direct predictors of self-stigma and negative self-esteem, a model with an indirect effect for symptoms mediated by victimization and a third model with a direct effect for negative symptoms and an indirect effect for positive symptoms mediated by victimization.\n\n\nRESULTS\nResults showed good model fit for the direct effects of both symptoms and victimization: both lead to an increase of self-stigma and subsequent negative self-esteem. Negative symptoms had a direct association with self-stigma, while the relationship between positive symptoms and self-stigma was mediated by victimization.\n\n\nCONCLUSIONS\nOur findings suggest that symptoms and victimization may contribute to self-stigma, leading to negative self-esteem in individuals with a schizophrenia spectrum disorder. Especially for patients with positive symptoms victimization seems to be an important factor in developing self-stigma. Given the burden of self-stigma on patients and the constraining effects on societal participation and service use, interventions targeting victimization as well as self-stigma are needed.", "title": "" }, { "docid": "00bcce935ca2e4d443941b7e90d644c9", "text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). 
Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.", "title": "" }, { "docid": "f03cc92b0bc69845b9f2b6c0c6f3168b", "text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.", "title": "" }, { "docid": "36bdc3b5f9ce2fbbff0dd815bf3eee67", "text": "A patient with upper limb dimelia including a double scapula, humerus, radius, and ulna, 11 metacarpals and digits (5 on the superior side, 6 on the inferior side) was treated with a simple amputation of the inferior limb resulting in cosmetic improvement and maintenance of range of motion in the preserved limb. During the amputation, the 2 limbs were found to be anatomically separate except for the ulnar nerve, which, in the superior limb, bifurcated into the sensory branch of radial nerve in the inferior limb, and the brachial artery, which bifurcated into the radial artery. Each case of this rare anomaly requires its own individually carefully planned surgical procedure.", "title": "" }, { "docid": "21afffc79652f8e6c0f5cdcd74a03672", "text": "It’s useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the ”image-to-image translation” problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. Such capability is desirable in applications like bidirectional translation", "title": "" }, { "docid": "938f8383d25d30b39b6cd9c78d1b3ab5", "text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. 
LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.", "title": "" }, { "docid": "d49260a42c4d800963ca8779cf50f1ee", "text": "Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of input data. In this paper, we enhance the autoencoder’s ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments are performed for evaluating both reconstruction and kernel alignment performance in classification tasks and visualization of high-dimensional data. Additionally, we show that our method is capable to emulate kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.", "title": "" } ]
scidocsrr
71d797de968480d5b70ea2b8cdb7ca0d
Coming of Age (Digitally): An Ecological View of Social Media Use among College Students
[ { "docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09", "text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.", "title": "" }, { "docid": "d06a7c8379ba991385af5dc986537360", "text": "Though social network site use is often treated as a monolithic activity, in which all time is equally social and its impact the same for all users, we examine how Facebook affects social capital depending upon: (1) types of site activities, contrasting one-on-one communication, broadcasts to wider audiences, and passive consumption of social news, and (2) individual differences among users, including social communication skill and self-esteem. Longitudinal surveys matched to server logs from 415 Facebook users reveal that receiving messages from friends is associated with increases in bridging social capital, but that other uses are not. However, using the site to passively consume news assists those with lower social fluency draw value from their connections. The results inform site designers seeking to increase social connectedness and the value of those connections.", "title": "" }, { "docid": "be6ce39ba9565f4d28dfeb29528a5046", "text": "The negative aspects of smartphone overuse on young adults, such as sleep deprivation and attention deficits, are being increasingly recognized recently. This emerging issue motivated us to analyze the usage patterns related to smartphone overuse. We investigate smartphone usage for 95 college students using surveys, logged data, and interviews. We first divide the participants into risk and non-risk groups based on self-reported rating scale for smartphone overuse. We then analyze the usage data to identify between-group usage differences, which ranged from the overall usage patterns to app-specific usage patterns. Compared with the non-risk group, our results show that the risk group has longer usage time per day and different diurnal usage patterns. Also, the risk group users are more susceptible to push notifications, and tend to consume more online content. We characterize the overall relationship between usage features and smartphone overuse using analytic modeling and provide detailed illustrations of problematic usage behaviors based on interview data.", "title": "" }, { "docid": "b8f1c6553cd97fab63eae159ae01797e", "text": "
Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality; providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers which may be distinct from other more traditional adolescent activities.", "title": "" } ]
[ { "docid": "55631b81d46fc3dcaad8375176cb1c68", "text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.", "title": "" }, { "docid": "1c16eec32b941af1646843bb81d16b5f", "text": "Facebook is rapidly gaining recognition as a powerful research tool for the social sciences. It constitutes a large and diverse pool of participants, who can be selectively recruited for both online and offline studies. Additionally, it facilitates data collection by storing detailed records of its users' demographic profiles, social interactions, and behaviors. With participants' consent, these data can be recorded retrospectively in a convenient, accurate, and inexpensive way. Based on our experience in designing, implementing, and maintaining multiple Facebook-based psychological studies that attracted over 10 million participants, we demonstrate how to recruit participants using Facebook, incentivize them effectively, and maximize their engagement. We also outline the most important opportunities and challenges associated with using Facebook for research, provide several practical guidelines on how to successfully implement studies on Facebook, and finally, discuss ethical considerations.", "title": "" }, { "docid": "e8c7f00d775254bd6b8c5393397d05a6", "text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. 
Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.", "title": "" }, { "docid": "8b552849d9c41d82171de2e87967836c", "text": "The need for building robots with soft materials emerged recently from considerations of the limitations of service robots in negotiating natural environments, from observation of the role of compliance in animals and plants [1], and even from the role attributed to the physical body in movement control and intelligence, in the so-called embodied intelligence or morphological computation paradigm [2]-[4]. The wide spread of soft robotics relies on numerous investigations of diverse materials and technologies for actuation and sensing, and on research of control techniques, all of which can serve the purpose of building robots with high deformability and compliance. But the core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.", "title": "" }, { "docid": "0be3de2b6f0dd5d3158cc7a98286d571", "text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.", "title": "" }, { "docid": "4df7857714e8b5149e315666fd4badd2", "text": "Visual place recognition and loop closure is critical for the global accuracy of visual Simultaneous Localization and Mapping (SLAM) systems. 
We present a place recognition algorithm which operates by matching local query image sequences to a database of image sequences. To match sequences, we calculate a matrix of low-resolution, contrast-enhanced image similarity probability values. The optimal sequence alignment, which can be viewed as a discontinuous path through the matrix, is found using a Hidden Markov Model (HMM) framework reminiscent of Dynamic Time Warping from speech recognition. The state transitions enforce local velocity constraints and the most likely path sequence is recovered efficiently using the Viterbi algorithm. A rank reduction on the similarity probability matrix is used to provide additional robustness in challenging conditions when scoring sequence matches. We evaluate our approach on seven outdoor vision datasets and show improved precision-recall performance against the recently published seqSLAM algorithm.", "title": "" }, { "docid": "0e238250d980c944ed7046448d2681fa", "text": "Analysing the behaviour of student performance in classroom education is an active area in educational research. Early prediction of student performance may be helpful for both teacher and the student. However, the influencing factors of the student performance need to be identified first to build up such early prediction model. The existing data mining literature on student performance primarily focuses on student-related factors, though it may be influenced by many external factors also. Superior teaching acts as a catalyst which improves the knowledge dissemination process from teacher to the student. It also motivates the student to put more effort on the study. However, the research question, how the performance or grade correlates with teaching, is still relevant in present days. In this work, we propose a quantifiable measure of improvement with respect to the expected performance of a student. Furthermore, this study analyses the impact of teaching on performance improvement in theoretical courses of classroom-based education. It explores nearly 0.2 million academic records collected from an online system of an academic institute of national importance in India. The association mining approach has been adopted here and the result shows that confidence of both non-negative and positive improvements increase with superior teaching. This result indeed establishes the fact that teaching has a positive impact on student performance. To be more specific, the growing confidence of non-negative and positive improvements indicate that superior teaching facilitates more students to obtain either expected or better than expected grade.", "title": "" }, { "docid": "d46af3854769569a631fab2c3c7fa8f3", "text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. 
We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.", "title": "" }, { "docid": "1e56ff2af1b76571823d54d1f7523b49", "text": "Open-source intelligence offers value in information security decision making through knowledge of threats and malicious activities that potentially impact business. Open-source intelligence using the internet is common, however, using the darknet is less common for the typical cybersecurity analyst. The challenges to using the darknet for open-source intelligence includes using specialized collection, processing, and analysis tools. While researchers share techniques, there are few publicly shared tools; therefore, this paper explores an open-source intelligence automation toolset that scans across the darknet connecting, collecting, processing, and analyzing. It describes and shares the tools and processes to build a secure darknet connection, and then how to collect, process, store, and analyze data. Providing tools and processes serves as an on-ramp for cybersecurity intelligence analysts to search for threats. Future studies may refine, expand, and deepen this paper's toolset framework.", "title": "" }, { "docid": "9e0cbbe8d95298313fd929a7eb2bfea9", "text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.", "title": "" }, { "docid": "c2c8efe7f626899f1a160aaa0112c80a", "text": "The genome of a cancer cell carries somatic mutations that are the cumulative consequences of the DNA damage and repair processes operative during the cellular lineage between the fertilized egg and the cancer cell. Remarkably, these mutational processes are poorly characterized. Global sequencing initiatives are yielding catalogs of somatic mutations from thousands of cancers, thus providing the unique opportunity to decipher the signatures of mutational processes operative in human cancer. However, until now there have been no theoretical models describing the signatures of mutational processes operative in cancer genomes and no systematic computational approaches are available to decipher these mutational signatures. Here, by modeling mutational processes as a blind source separation problem, we introduce a computational framework that effectively addresses these questions. 
Our approach provides a basis for characterizing mutational signatures from cancer-derived somatic mutational catalogs, paving the way to insights into the pathogenetic mechanism underlying all cancers.", "title": "" }, { "docid": "72462dd37b9d83f240778c794ddf0162", "text": "A new record conversion efficiency of 24.7% was attained at the research level by using a heterojunction with intrinsic thin-layer structure of practical size (101.8 cm2, total area) at a 98-μm thickness. This is a world height record for any crystalline silicon-based solar cell of practical size (100 cm2 and above). Since we announced our former record of 23.7%, we have continued to reduce recombination losses at the hetero interface between a-Si and c-Si along with cutting down resistive losses by improving the silver paste with lower resistivity and optimization of the thicknesses in a-Si layers. Using a new technology that enables the formation of a-Si layer of even higher quality on the c-Si substrate, while limiting damage to the surface of the substrate, the Voc has been improved from 0.745 to 0.750 V. We also succeeded in improving the fill factor from 0.809 to 0.832.", "title": "" }, { "docid": "61406f27199acc5f034c2721d66cda89", "text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.", "title": "" }, { "docid": "65eb604a2d45f29923ba24976130adc1", "text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.", "title": "" }, { "docid": "a1fed0bcce198ad333b45bfc5e0efa12", "text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.", "title": "" }, { "docid": "5054ad32c33dc2650c1dcee640961cd5", "text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). 
The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted", "title": "" }, { "docid": "b6cc88bc123a081d580c9430c0ad0207", "text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.", "title": "" }, { "docid": "503101a7b0f923f8fecb6dc9bb0bde37", "text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. 
Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems", "title": "" }, { "docid": "94f040bf8f9bc6f30109b822b977c3b5", "text": "Introduction: The tooth mobility due to periodontal bone loss can cause masticatory discomfort, mainly in protrusive movements in the region of the mandibular anterior teeth. Thus, the splinting is a viable alternative to keep them in function satisfactorily. Objective: This study aimed to demonstrate, through a clinical case with medium-term following-up, the clinical application of splinting with glass fiber-reinforced composite resin. Case report: Female patient, 73 years old, complained about masticatory discomfort related to the right mandibular lateral incisor. Clinical and radiographic evaluation showed grade 2 dental mobility, bone loss and increased periodontal ligament space. The proposed treatment was splinting with glass fiber-reinforced composite resin from the right mandibular canine to left mandibular canine. Results: Four-year follow-up showed favorable clinical and radiographic results with respect to periodontal health and maintenance of functional aspects. Conclusion: The splinting with glass fiber-reinforced composite resin is a viable technique and stable over time for the treatment of tooth mobility.", "title": "" } ]
scidocsrr
1433b929b171815ba51b87a2f3459e9b
Automatic video description generation via LSTM with joint two-stream encoding
[ { "docid": "4f58d355a60eb61b1c2ee71a457cf5fe", "text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "title": "" }, { "docid": "9734f4395c306763e6cc5bf13b0ca961", "text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. 
We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.", "title": "" }, { "docid": "cd45dd9d63c85bb0b23ccb4a8814a159", "text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization", "title": "" } ]
[ { "docid": "af6b26efef62f3017a0eccc5d2ae3c33", "text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.", "title": "" }, { "docid": "4761b8398018e4a15a1d67a127dd657d", "text": "The increasing popularity of social networks, such as Facebook and Orkut, has raised several privacy concerns. Traditional ways of safeguarding privacy of personal information by hiding sensitive attributes are no longer adequate. Research shows that probabilistic classification techniques can effectively infer such private information. The disclosed sensitive information of friends, group affiliations and even participation in activities, such as tagging and commenting, are considered background knowledge in this process. In this paper, we present a privacy protection tool, called Privometer, that measures the amount of sensitive information leakage in a user profile and suggests self-sanitization actions to regulate the amount of leakage. In contrast to previous research, where inference techniques use publicly available profile information, we consider an augmented model where a potentially malicious application installed in the user's friend profiles can access substantially more information. In our model, merely hiding the sensitive information is not sufficient to protect the user privacy. We present an implementation of Privometer in Facebook.", "title": "" }, { "docid": "f8ecc204d84c239b9f3d544fd8d74a5c", "text": "Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.", "title": "" }, { "docid": "d8b19c953cc66b6157b87da402dea98a", "text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. 
The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.", "title": "" }, { "docid": "285da3b342a3b3bd14fb14bca73914cd", "text": "This paper presents expressions for the waveforms and design equations to satisfy the ZVS/ZDS conditions in the class-E power amplifier, taking into account the MOSFET gate-to-drain linear parasitic capacitance and the drain-to-source nonlinear parasitic capacitance. Expressions are given for power output capability and power conversion efficiency. Design examples are presented along with the PSpice-simulation and experimental waveforms at 2.3 W output power and 4 MHz operating frequency. It is shown from the expressions that the slope of the voltage across the MOSFET gate-to-drain parasitic capacitance during the switch-off state affects the switch-voltage waveform. Therefore, it is necessary to consider the MOSFET gate-to-drain capacitance for achieving the class-E ZVS/ZDS conditions. As a result, the power output capability and the power conversion efficiency are also affected by the MOSFET gate-to-drain capacitance. The waveforms obtained from PSpice simulations and circuit experiments showed the quantitative agreements with the theoretical predictions, which verify the expressions given in this paper.", "title": "" }, { "docid": "175551435f1a4c73110b79e01306412f", "text": "The development of MEMS actuators is rapidly evolving and continuously new progress in terms of efficiency, power and force output is reported. Pneumatic and hydraulic are an interesting class of microactuators that are easily overlooked. Despite the 20 years of research, and hundreds of publications on this topic, these actuators are only popular in microfluidic systems. In other MEMS applications, pneumatic and hydraulic actuators are rare in comparison with electrostatic, thermal or piezo-electric actuators. However, several studies have shown that hydraulic and pneumatic actuators deliver among the highest force and power densities at microscale. It is believed that this asset is particularly important in modern industrial and medical microsystems, and therefore, pneumatic and hydraulic actuators could start playing an increasingly important role. This paper shows an in-depth overview of the developments in this field ranging from the classic inflatable membrane actuators to more complex piston–cylinder and drag-based microdevices. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "1675d99203da64eab8f9722b77edaab5", "text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. 
In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.", "title": "" }, { "docid": "47e84cacb4db05a30bedfc0731dd2717", "text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.", "title": "" }, { "docid": "c78a4446be38b8fff2a949cba30a8b65", "text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.", "title": "" }, { "docid": "c5443c3bdfed74fd643e7b6c53a70ccc", "text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. 
Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age-related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4", "title": "" }, { "docid": "246866da7509b2a8a2bda734a664de9c", "text": "In this paper we present an approach to procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantics allow reducing the gap between game designers' requirements and game developers' needs, therefore enhancing video game productivity. Using the gameplay loops concept for game content generation offers a low-cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment has been conducted to study the impact of this approach on game development.", "title": "" }, { "docid": "b776b58f6f78e77c81605133c6e4edce", "text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.", "title": "" }, { "docid": "4783e35e54d0c7f555015427cbdc011d", "text": "The language of the deaf and dumb, which uses body parts to convey the message, is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed methods to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative study between them [1].", "title": "" }, { "docid": "2ed36e909f52e139b5fd907436e80443", "text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1. 
Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heart and liver; the epithelial components (derived from the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo. (ABSTRACT TRUNCATED AT 400 WORDS)", "title": "" }, { "docid": "33e45b66cca92f15270500c32a1c0b94", "text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. 
Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was designed as an embedded control. It has a database of the angles of orientation of the horizontal axle, therefore it has no sensor input signal and it functions as an open-loop control system. Combining the above-mentioned characteristics in one system, the tracker is a new technique of the active type. It is also a rotational robot with 1 degree of freedom.", "title": "" }, { "docid": "a02fb872137fe7bc125af746ba814849", "text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.", "title": "" }, { "docid": "afae66e9ff49274bbb546cd68490e5e4", "text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. 
The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.", "title": "" }, { "docid": "6d13952afa196a6a77f227e1cc9f43bd", "text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.", "title": "" }, { "docid": "1d3b2a5906d7db650db042db9ececed1", "text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.", "title": "" } ]
scidocsrr
516a57352a3d2bbf6172c2e4425d424d
Recent Advance in Content-based Image Retrieval: A Literature Survey
[ { "docid": "d063f8a20e2b6522fe637794e27d7275", "text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.", "title": "" }, { "docid": "83ad3f9cce21b2f4c4f8993a3d418a44", "text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.", "title": "" } ]
[ { "docid": "c16f21fd2b50f7227ea852882004ef5b", "text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.", "title": "" }, { "docid": "7f68d112267f94d91cd4c45ecb7f874a", "text": "In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form x ↦ max(0, ⟨w,x⟩) with w ∈ R denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations are fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.", "title": "" }, { "docid": "5207f7a986dd1fecbe4afd0789d0628a", "text": "Characterization of driving maneuvers or driving styles through motion sensors has become a field of great interest. Before now, this characterization used to be carried out with signals coming from extra equipment installed inside the vehicle, such as On-Board Diagnostic (OBD) devices or sensors in pedals. Nowadays, with the evolution and scope of smartphones, these have become the devices for recording mobile signals in many driving characterization applications. Normally multiple available sensors are used, such as accelerometers, gyroscopes, magnetometers or the Global Positioning System (GPS). However, using sensors such as GPS increase significantly battery consumption and, additionally, many current phones do not include gyroscopes. Therefore, we propose the characterization of driving style through only the use of smartphone accelerometers. We propose a deep neural network (DNN) architecture that combines convolutional and recurrent networks to estimate the vehicle movement direction (VMD), which is the forward movement directional vector captured in a phone's coordinates. Once VMD is obtained, multiple applications such as characterizing driving styles or detecting dangerous events can be developed. In the development of the proposed DNN architecture, two different methods are compared. The first one is based on the detection and classification of significant acceleration driving forces, while the second one relies on longitudinal and transversal signals derived from the raw accelerometers. 
The final success rate of VMD estimation for the best method is 90.07%.", "title": "" }, { "docid": "ff8dec3914e16ae7da8801fe67421760", "text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.", "title": "" }, { "docid": "d2bea5e928167f295e05412962d44b99", "text": "The development of e-commerce has increased the popularity of online shopping worldwide. In Malaysia, it was reported that online shopping market size was RM1.8 billion in 2013 and it is estimated to reach RM5 billion by 2015. However, online shopping was rated 11th out of 15 purposes of using the internet in 2012. Consumers’ perceived risks of online shopping have become a hot research topic as they directly influence users’ attitude towards online purchasing, and that attitude has a significant impact on online purchasing behaviour. The conceptualization of consumers’ perceived risk, attitude and online shopping behaviour of this study provides empirical evidence in the study of consumer online behaviour. Four types of risk (product, financial, convenience and non-delivery risks) were examined in terms of their effect on consumers’ online attitude. A web-based survey was employed, and a total of 300 online shoppers of Malaysia's largest online marketplace participated in this study. The findings indicated that product risk, financial and non-delivery risks are hazardous and negatively affect the attitude of online shoppers. Convenience risk was found to have a positive effect on consumers’ attitude, denoting that online buyers of this site trusted the online seller and they encountered less trouble with the site. It also implies that consumers were not really concerned about the non-convenience aspects of online shopping, such as handling returned products and examining the quality of products featured on the online seller's website. The online buyers’ attitude significantly and positively affects their online purchasing behaviour. The findings provide a useful model for measuring and managing consumers’ perceived risk in internet-based transactions to increase their involvement in online shopping and to reduce their cognitive dissonance in the e-commerce setting.", "title": "" }, { "docid": "8f13fbf6de0fb0685b4a39ee5f3bb415", "text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. 
This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.", "title": "" }, { "docid": "753983a2361a2439fe031543a209ad79", "text": "Social media is playing an increasingly important role as the sources of health related information. The goal of this study is to investigate the extent social media appear in search engine results in the context of health-related information search. We simulate an information seeker’s use of a search engine for health consultation using a set of pre-defined keywords in combination with 5 types of complaints. The results showed that social media constitute a significant part of the search results, indicating that search engines likely direct information seekers to social media sites. This study confirms the growing importance of social media in health communication. It also provides evidence regarding opportunities and challenges faced by health professionals and general public.", "title": "" }, { "docid": "7a005d66591330d6fdea5ffa8cb9020a", "text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.", "title": "" }, { "docid": "e3b707ad340b190393d3384a1a364e63", "text": "Figure 3. High-level overview of our approach for abstracting execution logs to execution events. Table III (log lines used as a running example to explain our approach): 1. Start check out; 2. Paid for, item=bag, quality=1, amount=100; 3. Paid for, item=book, quality=3, amount=150; 4. Check out, total amount is 250; 5. Check out done. Table IV (running example logs after the anonymize step): 1. Start check out; 2. Paid for, item=$v, quality=$v, amount=$v; 3. Paid for, item=$v, quality=$v, amount=$v; 4. Check out, total amount=$v; 5. Check out done. Table V (running example logs after the tokenize step, bin names as (no. of words, no. of parameters)): bin (3,0): lines 1 ‘Start check out’ and 5 ‘Check out done’; bin (5,1): line 4 ‘Check out, total amount=$v’; bin (8,3): lines 2 and 3 ‘Paid for, item=$v, quality=$v, amount=$v’. 4.2.2. The tokenize step: The tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and estimated parameters in each log line. The use of multiple bins limits the search space of the following step (i.e., the categorize step). The use of bins permits us to process large log files in a timely fashion using a limited memory footprint since the analysis is done per bin instead of having to load up all the lines in the log file. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and parameters are placed in the same bin. Table V shows the sample log lines after the anonymize and tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: number of words and number of parameters that are contained in the log line associated with that bin. The right column in Table VI shows the log lines. Each row shows the bin and its corresponding log lines. The second and the third log lines contain 8 words and are likely to contain 3 parameters. Thus, the second and third log lines are grouped together in the (8,3) bin. Similarly, the first and last log lines are grouped together in the (3,0) bin since they both contain 3 words and are likely to contain no parameters. 4.2.3. The categorize step: The categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution events database for future references. The algorithm used in the categorize step is shown below. Our algorithm goes through the log lines bin by bin. After this step, each log line should be abstracted to an execution event. Table VI shows the results of our working example after the categorize step. Table VI (running example logs after the categorize step, execution events as word_parameter_id): 3 0 1: line 1 ‘Start check out’; 3 0 2: line 5 ‘Check out done’; 5 1 1: line 4 ‘Check out, total amount=$v’; 8 3 1: lines 2 and 3 ‘Paid for, item=$v, quality=$v, amount=$v’. The algorithm: for each bin bi: for each log line lk in bin bi: for each execution event e(bi, j) corresponding to bi in the events DB: perform word by word comparison between e(bi, j) and lk; if (there is no difference) then lk is of type e(bi, j), break; end if; end for // advance to next e(bi, j); if (lk does not have a matching execution event) then lk is a new execution event, store an abstracted lk into the execution events DB; end if; end for // advance to the next log line; end for // advance to the next bin. We now explain our algorithm using the running example. Our algorithm starts with the (3,0) bin. Initially, there are no execution events that correspond to this bin yet. Therefore, the execution event corresponding to the first log line becomes the first execution event, namely 3 0 1. The 1 at the end of 3 0 1 indicates that this is the first execution event to correspond to the bin, which has 3 words and no parameters (i.e., bin 3 0). Then the algorithm moves to the next log line in the (3,0) bin, which contains the fifth log line. The algorithm compares the fifth log line with all the existing execution events in the (3,0) bin. Currently, there is only one execution event: 3 0 1. As the fifth log line is not similar to the 3 0 1 execution event, we create a new execution event 3 0 2 for the fifth log line. With all the log lines in the (3,0) bin processed, we can move on to the (5,1) bin. As there are no execution events that correspond to the (5,1) bin initially, the fourth log line gets assigned to a new execution event 5 1 1. Finally, we move on to the (8,3) bin. First, the second log line gets assigned with a new execution event 8 3 1 since there are no execution events corresponding to this bin yet. As the third log line is the same as the second log line (after the anonymize step), the third log line is categorized as the same execution event as the second log line. Table VI shows the sample log lines after the categorize step. The left column is the abstracted execution event. The right column shows the line number together with the corresponding log lines. 4.2.4. The reconcile step: Since the anonymize step uses heuristics to identify dynamic information in a log line, there is a chance that we might miss to anonymize some dynamic information. The missed dynamic information will result in the abstraction of several log lines to several execution events that are very similar. Table VII shows an example of dynamic information that was missed by the anonymize step (Table VII, sample logs that the categorize step would fail to abstract: 5 0 1 ‘Start processing for user Jen’; 5 0 2 ‘Start processing for user Tom’; 5 0 3 ‘Start processing for user Henry’; 5 0 4 ‘Start processing for user Jack’; 5 0 5 ‘Start processing for user Peter’). The table shows five different execution events. However, the user names after ‘for user’ are dynamic information and should have been replaced by the generic token ‘$v’. All the log lines shown in Table VII should have been abstracted to the same execution event after the categorize step. The reconcile step addresses this situation. All execution events are re-examined to identify which ones are to be merged. Execution events are merged if: 1. They belong to the same bin. 2. They differ from each other by one token at the same positions. 3. There exists a few of such execution events. We used a threshold of five events in our case studies. Other values are possibly based on the content of the analyzed log files. The threshold prevents the merging of similar yet different execution events, such as ‘Start processing’ and ‘Stop processing’, which should not be merged. Looking at the execution events in Table VII, we note that they all belong to the ‘5 0’ bin and differ from each other only in the last token. Since there are five of such events, we merged them into one event. Table VIII shows the execution events from Table VII after the reconcile step (Table VIII, sample logs after the reconcile step: 5 0 1 ‘Start processing for user $v’). Note that if the ‘5 0’ bin contains another execution event: ‘Stop processing for user John’; it will not be merged with the above execution events since it differs by two tokens instead of only the last token.", "title": "" }, { "docid": "94c6ab34e39dd642b94cc2f538451af8", "text": "Like every other social practice, journalism cannot now fully be understood apart from globalization. As part of a larger platform of communication media, journalism contributes to this experience of the world-as-a-single-place and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. In this essay, I review this intersection of journalism and globalization by considering the communication field’s approach to ‘media globalization’ within a broader interdisciplinary perspective that mixes the sociology of globalization with aspects of geography and social anthropology. By placing the emphasis on social practices, elites, and specific geographical spaces, I introduce a less media-centric approach to media globalization and how journalism fits into the process. Beyond ‘global village journalism,’ this perspective captures the changes globalization has brought to journalism. Like every other social practice, journalism cannot now fully be understood apart from globalization. This process refers to the intensification of social interconnections, which allows apprehending the world as a single place, creating a greater awareness of our own place and its relative location within the range of world experience. As part of a larger platform of communication media, journalism contributes to this experience and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. The study of globalization has become a fashionable growth industry, attracting an interdisciplinary assortment of scholars. Journalism, meanwhile, itself has become an important subject in its own right within media studies, with a growing number of projects taking an international perspective (reviewed in Reese 2009). Combining the two areas yields a complex subject that requires some careful sorting out to get beyond the jargon and the easy country–by-country case studies. 
From the globalization studies side, the media role often seems like an afterthought, a residual category of social change, or a self-evident symbol of the global era–CNN, for example. Indeed, globalization research has been slower to consider the changing role of journalism, compared to the attention devoted to financial and entertainment flows. That may be expected, given that economic and cultural globalization is further along than that of politics, and journalism has always been closely tied to democratic structures, many of which are inherently rooted in local communities. The media-centrism of communication research, on the other hand, may give the media—and the journalism associated with them—too much credit in the globalization process, treating certain media as the primary driver of global connections and the proper object of study. Global connections support new forms of journalism, which create politically significant new spaces within social systems, lead to social change, and privilege certain forms of power. Therefore, we want to know how journalism has contributed to these new spaces, bringing together new combinations of transnational élites, media professionals, and citizens. To what extent are these interactions shaped by a globally consistent shared logic, and what are the consequences for social change and democratic values? Here, however, the discussion often gets reduced to whether a cultural homogenization is taking place, supporting a ‘McWorld’ thesis of a unitary media and journalistic form. But we do not have to subscribe to a one-world media monolith prediction to expect certain transnational logics to emerge to take their place along side existing ones. Journalism at its best contributes to social transparency, which is at the heart of the globalization optimists’ hopes for democracy (e.g. Giddens 2000). The insertion of these new logics into national communities, especially those closed or tightly controlled societies, can bring an important impulse for social change (seen in a number of case studies from China, as in Reese and Dai 2009). In this essay, I will review a few of the issues at the intersection of journalism and globalization and consider a more nuanced view of media within a broader network of actors, particularly in the case of journalism as it helps create emerging spaces for public affairs discourse. Understanding the complex interplay of the global and local requires an interdisciplinary perspective, mixing the sociology of globalization with aspects of geography and social anthropology. This helps avoid equating certain emerging global news forms with a new and distinct public sphere. The globalization of journalism occurs through a multitude of levels, relationships, social actors, and places, as they combine to create new public spaces. Communication research may bring journalism properly to the fore, but it must be considered within the insights into places and relationships provided by these other disciplines. Before addressing these questions, it is helpful to consider how journalism has figured into some larger debates. Media Globalization: Issues of Scale and Homogeneity One major fault line lies within the broader context of ‘media,’ where journalism has been seen as providing flows of information and transnational connections. 
That makes it a key factor in the phenomenon of ‘media globalization.’ McLuhan gave us the enduring image of the ‘global village,’ a quasi-utopian idea that has seeped into such theorizing about the contribution of media. The metaphor brings expectations of an extensive, unitary community, with a corresponding set of universal, global values, undistorted by parochial interests and propaganda. The interaction of world media systems, however, has not as of yet yielded the kind of transnational media and programs that would support such ‘village’-worthy content (Ferguson 1992; Sparks 2007). In fact, many of the communication barriers show no signs of coming down, with many specialized enclaves becoming stronger. In this respect, changes in media reflect the larger crux of globalization that it simultaneously facilitates certain ‘monoculture’ global standards along with the proliferation of a host of micro-communities that were not possible before. In a somewhat analogous example, the global wine trade has led to convergent trends in internationally desirable tastes but also allowed a number of specialized local wineries to survive and flourish through the ability to reach global markets. The very concept of ‘media globalization’ suggests that we are not quite sure if media lead to globalization or are themselves the result of it. In any case, giving the media a privileged place in shaping a globalized future has led to high expectations for international journalism, satellite television, and other media to provide a workable global public sphere, making them an easy target if they come up short. In his book, Media globalization myth, Kai Hafez (2007) provides that kind of attack. Certainly, much of the discussion has suffered from overly optimistic and under-conceptualized research, with global media technology being a ‘necessary but not sufficient condition for global communication.’ (p. 2) Few truly transnational media forms have emerged that have a more supranational than national allegiance (among newspapers, the International Herald Tribune, Wall St. Journal Europe, Financial Times), and among transnational media even CNN does not present a single version to the world, split as it is into various linguistic viewer zones. Defining cross-border communication as the ‘core phenomenon’ of globalization leads to comparing intra- to inter-national communication as the key indicator of globalization. For example, Hafez rejects the internet as a global system of communication, because global connectivity does not exceed local and regional connections. With that as a standard, we may indeed conclude that media globalization has failed to produce true transnational media platforms or dialogs across boundaries. Rather a combination of linguistic and digital divides, along with enduring regional preferences, actually reinforces some boundaries. (The wishful thinking for a global media may be tracked to highly mobile Western scholars, who in Hafez’s ‘hotel thesis’ overestimate the role of such transnational media, because they are available to them in their narrow and privileged travel circles.) Certainly, the foreign news most people receive, even about big international events, is domesticated through the national journalistic lens. 
Indeed, international reporting, as a key component of the would-be global public sphere, flunks Hafez’s ‘global test,’ incurring the same criticisms others have leveled for years at national journalism: elite-focused, conflictual, and sensational, with a narrow, parochial emphasis. If ‘global’ means giving ‘dialogic’ voices a chance to speak to each other without reproducing national ethnocentrism, then the world’s media still fail to measure up. Conceptualizing the ‘Global’ For many, ‘global’ means big. That goes too for the global village perspective, which emphasizes the scaling dimension and equates the global with ‘bigness,’ part of a nested hierarchy of levels of analysis based on size: beyond local, regional, and nationa", "title": "" }, { "docid": "d2fe95e4955b78aeef8c8a565fbc9fae", "text": "With the advance of the World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, the “big geospatial data management” issues start attracting attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of their interests efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) and Environmental Systems Research Institute, Inc (ESRI) Shapefiles. In addition, we apply the distributed computing concept to promote the performance of the GeoWeb Crawler. The result shows that for 10 targeted resources types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of GeoWeb.", "title": "" }, { "docid": "e5d474fc8c0d2c97cc798eda4f9c52dd", "text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.", "title": "" }, { "docid": "88ffb30f1506bedaf7c1a3f43aca439e", "text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. 
We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.", "title": "" }, { "docid": "9bca70974fcccc23c2b3463909c1d641", "text": "Advances in online and computer supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly resolve a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.", "title": "" }, { "docid": "72d59a0605a82fc714020ac67ac1e52b", "text": "We present an accurate stereo matching method using local expansion moves based on graph cuts. This new move-making scheme is used to efficiently infer per-pixel 3D plane labels on a pairwise Markov random field (MRF) that effectively combines recently proposed slanted patch matching and curvature regularization terms. The local expansion moves are presented as many $\\alpha$-expansions defined for small grid regions. The local expansion moves extend traditional expansion moves by two ways: localization and spatial propagation. By localization, we use different candidate $\\alpha$-labels according to the locations of local $\\alpha$-expansions. By spatial propagation, we design our local $\\alpha$-expansions to propagate currently assigned labels for nearby regions. With this localization and spatial propagation, our method can efficiently infer MRF models with a continuous label space using randomized search. Our method has several advantages over previous approaches that are based on fusion moves or belief propagation; it produces submodular moves deriving a subproblem optimality; it helps find good, smooth, piecewise linear disparity maps; it is suitable for parallelization; it can use cost-volume filtering techniques for accelerating the matching cost computations. Even using a simple pairwise MRF, our method is shown to have best performance in the Middlebury stereo benchmark V2 and V3.", "title": "" }, { "docid": "e1fe3c9b60f316c8658a18796245c243", "text": "The ransomware nightmare is taking over the internet impacting common users, small businesses and large ones. The interest and investment which are pushed into this market each month, tells us a few things about the evolution of both technical and social engineering and what to expect in the short-coming future from them. In this paper we analyze how ransomware programs developed in the last few years and how they were released in certain market segments throughout the deep web via RaaS, exploits or SPAM, while learning from their own mistakes to bring profit to the next level. We will also try to highlight some mistakes that were made, which allowed recovering the encrypted data, along with the ransomware authors preference for specific encryption types, how they got to distribute, the silent agreement between ransomwares, coin-miners and bot-nets and some edge cases of encryption, which may prove to be exploitable in the short-coming future.", "title": "" }, { "docid": "f4a31f5dbd98ae0cc9faf3f0255dbca6", "text": "Automotive SoCs are constantly being tested for correct functional operation, even long after they have left fabrication. The testing is done at the start of operation (car ignition) and repeatedly during operation (during the drive) to check for faults. Faults can result from, but are not restricted to, a failure in a part of a semiconductor circuit such as a failed transistor, interconnect failure due to electromigration, or faults caused by soft errors (e.g., an alpha particle switching a bit in a RAM or other circuit element). While the tests can run long after the chip was taped-out, the safety definition and test plan effort is starting as early as the specification definitions. In this paper we give an introduction to functional safety concentrating on the ISO26262 standard and we touch on a couple of approaches to functional safety for an Intellectual Property (IP) part such as a microprocessor, including software self-test libraries and logic BIST. We discuss the additional effort needed for developing a design for the automotive market. Lastly, we focus on our experience of using fault grading as a method for developing a self-test library that periodically tests the circuit operation. 
We discuss the effect that implementation decisions have on this effort and why it is important to start with this effort early in the design process.", "title": "" }, { "docid": "a3205b696c9f93f1fbe1c8a198d41c57", "text": "The axial magnetic flux leakage (MFL) inspection tools cannot reliably detect or size axially aligned cracks, such as SCC, longitudinal corrosion, long seam defects, and axially oriented mechanical damage. To address this problem, the circumferential MFL inspection tool is introduced. The finite element (FE) model is established by adopting ANSYS software to simulate magnetostatics. The results show that the amount of flux that is diverted out of the pipe depends on the geometry of the defect; the primary variables that affect the flux leakage are the ones that define the volume of the defect. The defect location can significantly affect flux leakage; the magnetic field magnitude arising due to the presence of the defect is immersed in the high field close to the permanent magnets. These results demonstrate the feasibility of detecting narrow axial defects and the practicality of developing a circumferential MFL tool.", "title": "" }, { "docid": "2e4c4e734532fb9e70742c3a6333d592", "text": "In this paper we address the problem of automated classification of isolates, i.e., the problem of determining the family of genomes to which a given genome belongs. Additionally, we address the problem of automated unsupervised hierarchical clustering of isolates according only to their statistical substring properties. For both of these problems we present novel algorithms based on nucleotide n-grams, with no required preprocessing steps such as sequence alignment. Results obtained experimentally are very positive and suggest that the proposed techniques can be successfully used in a variety of related problems. The reported experiments demonstrate better performance than some of the state-of-the-art methods. We report on a new distance measure between n-gram profiles, which shows superior performance compared to many other measures, including commonly used Euclidean distance.", "title": "" }, { "docid": "ccfa5c06643cb3913b0813103a85e0b0", "text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, for which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to a whopping 20% on a few).", "title": "" } ]
scidocsrr
4f452ff1503a47b7a94c925f46b3c649
Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle
[ { "docid": "4e42d29a924c6e1e11456255c1f6cba0", "text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.", "title": "" }, { "docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd", "text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.", "title": "" } ]
[ { "docid": "31b26778e230d2ea40f9fe8996e095ed", "text": "The effects of beverage alcohol (ethanol) on the body are determined largely by the rate at which it and its main breakdown product, acetaldehyde, are metabolized after consumption. The main metabolic pathway for ethanol involves the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Seven different ADHs and three different ALDHs that metabolize ethanol have been identified. The genes encoding these enzymes exist in different variants (i.e., alleles), many of which differ by a single DNA building block (i.e., single nucleotide polymorphisms [SNPs]). Some of these SNPs result in enzymes with altered kinetic properties. For example, certain ADH1B and ADH1C variants that are commonly found in East Asian populations lead to more rapid ethanol breakdown and acetaldehyde accumulation in the body. Because acetaldehyde has harmful effects on the body, people carrying these alleles are less likely to drink and have a lower risk of alcohol dependence. Likewise, an ALDH2 variant with reduced activity results in acetaldehyde buildup and also has a protective effect against alcoholism. In addition to affecting drinking behaviors and risk for alcoholism, ADH and ALDH alleles impact the risk for esophageal cancer.", "title": "" }, { "docid": "d48053467e72a6a550de8cb66b005475", "text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. 
*run a mile for ten minutes, *wash the clothes clean white.", "title": "" }, { "docid": "cf5c6b5593ef5f0fd54c4fc7951e2460", "text": "Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings.", "title": "" }, { "docid": "1f139fff7af5a49ee0e21f61bdf5a9b8", "text": "This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.", "title": "" }, { "docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec", "text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.", "title": "" }, { "docid": "df7922bcf3a0ecac69b2ac283505c312", "text": "With the growing use of distributed information networks, there is an increasing need for algorithmic and system solutions for data-driven knowledge acquisition using distributed, heterogeneous and autonomous data repositories. In many applications, practical constraints require such systems to provide support for data analysis where the data and the computational resources are available. This presents us with distributed learning problems. 
We precisely formulate a class of distributed learning problems; present a general strategy for transforming traditional machine learning algorithms into distributed learning algorithms; and demonstrate the application of this strategy to devise algorithms for decision tree induction (using a variety of splitting criteria) from distributed data. The resulting algorithms are provably exact in that the decision tree constructed from distributed data is identical to that obtained by the corresponding algorithm when in the batch setting. The distributed decision tree induction algorithms have been implemented as part of INDUS, an agent-based system for data-driven knowledge acquisition from heterogeneous, distributed, autonomous data sources.", "title": "" }, { "docid": "ad2e02fd3b349b2a66ac53877b82e9bb", "text": "This paper proposes a novel approach for the evolution of artificial creatures which move in a 3D virtual environment, based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it by controlling the muscle force of the creature. The genetic algorithm is used to evolve the architecture of the creature based on distance metrics for fitness evaluation. The damaged morphologies of the creature are elaborated, and a crossover algorithm is used to control them. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creatures having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of the motion of virtual creatures shows that improved swimming is achieved in simulated mediums with viscous drag of 1–10 arbitrary units.", "title": "" }, { "docid": "b82440fdab626e7a2f02c2dc9b7c359a", "text": "This study formulates a two-objective model to determine the optimal liner routing, ship size, and sailing frequency for container carriers by minimizing shipping costs and inventory costs. First, shipping and inventory cost functions are formulated using an analytical method. Then, based on a trade-off between shipping costs and inventory costs, Pareto optimal solutions of the two-objective model are determined. Not only can the optimal ship size and sailing frequency be determined for any route, but also the routing decision on whether to route containers through a hub or directly to their destination can be made in objective value space. Finally, the theoretical findings are applied to a case study, with highly reasonable results. The results show that the optimal routing, ship size, and sailing frequency with respect to each level of inventory costs and shipping costs can be determined using the proposed model. The optimal routing decision tends to be shipping the cargo through a hub as the hub charge is decreased or its efficiency improved. In addition, the proposed model not only provides a tool to analyze the trade-off between shipping costs and inventory costs, but it also provides flexibility on the decision-making for container carriers.", "title": "" }, { "docid": "d5a7b2c027679d016c7c1ed128e48fd8", "text": "[Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay.] The delay is given by the time index associated with the peak value of f(t). 
This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive filtering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverberant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x_s, y_s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill simplex algorithm [6] applied to minimize the Euclidean distance between the M observed delays and the corresponding M theoretical delays. An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a specific position, has shown that constructive interference phenomena occur in the presence of significant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reflections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean position error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. [Table 1: Average location error using either all 10 pairs or 4 pairs of microphones, for three reverberation time conditions. 0.1 sec: 38.4 cm (10 pairs) vs. 29.8 cm (4 pairs); 0.6 sec: 51.3 cm vs. 32.1 cm; 1.7 sec: 105.0 cm vs. 46.4 cm.] …", "title": "" }, { "docid": "0ded64c37e44433f9822650615e0ef7a", "text": "Transseptal catheterization is a vital component of percutaneous transvenous mitral commissurotomy. Therefore, a well-executed transseptal catheterization is the key to a safe and successful percutaneous transvenous mitral commissurotomy. Two major problems inherent in atrial septal puncture for percutaneous transvenous mitral commissurotomy are cardiac perforation and puncture of an inappropriate atrial septal site. The former may lead to the serious complication of cardiac tamponade and the latter to possible difficulty in maneuvering the Inoue balloon catheter across the mitral orifice. This article details atrial septal puncture technique, including landmark selection for optimal septal puncture sites, avoidance of inappropriate puncture sites, and step-by-step description of atrial septal puncture.", "title": "" }, { "docid": "27a0c382d827f920c25f7730ddbacdc0", "text": "Some new parameters in Vivaldi Notch antennas are debated over in this paper. They can be availed for the bandwidth application amelioration. 
The aforementioned limiting factors comprise two parameters for the radial stub dislocation, one parameter for the stub opening angle, and one parameter for the stub’s offset angle. The aforementioned parameters are rectified by means of the optimization algorithm to accomplish a better frequency application. The results obtained in this article will eventually be collated with those of the other similar antennas. The best achieved bandwidth in this article is 17.1 GHz.", "title": "" }, { "docid": "39bf990d140eb98fa7597de1b6165d49", "text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.", "title": "" }, { "docid": "581c4d11e59dc17e0cb6ecf5fa7bea93", "text": "This paper describes the three methodologies used by CALCE in their winning entry for the IEEE 2012 PHM Data Challenge competition. An experimental data set from seventeen ball bearings was provided by the FEMTO-ST Institute. The data set consisted of data from six bearings for algorithm training and data from eleven bearings for testing. The authors developed prognostic algorithms based on the data from the training bearings to estimate the remaining useful life of the test bearings. Three methodologies are presented in this paper. Result accuracies of the winning methodology are presented.", "title": "" }, { "docid": "d2af69233bf30376afb81b204b063c81", "text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.", "title": "" }, { "docid": "c01e634ef86002a8b6fa2e78e3e1a32a", "text": "In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. 
The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of “optimize the common case”.", "title": "" }, { "docid": "13452d0ceb4dfd059f1b48dba6bf5468", "text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "eb7990a677cd3f96a439af6620331400", "text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "title": "" }, { "docid": "9327a13308cd713bcfb3b4717eaafef0", "text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. 
Need for achievement and self-esteem may be the most promising individual difference variables.", "title": "" }, { "docid": "460e8daf5dfc9e45c3ade5860aa9cc57", "text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.", "title": "" } ]
scidocsrr
42c5ebd88bc77fbaab6795a44f86e514
Developing a Knowledge Management Strategy: Reflections from an Action Research Project
[ { "docid": "a2047969c4924a1e93b805b4f7d2402c", "text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.", "title": "" }, { "docid": "ca6b556eb4de9a8f66aefd5505c20f3d", "text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich", "title": "" } ]
[ { "docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c", "text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.", "title": "" }, { "docid": "7f5e6c0061351ab064aa7fd25d076a1b", "text": "Guadua angustifolia Kunth was successfully propagated in vitro from axillary buds. Culture initiation, bud sprouting, shoot and plant multiplication, rooting and acclimatization, were evaluated. Best results were obtained using explants from greenhouse-cultivated plants, following a disinfection procedure that comprised the sequential use of an alkaline detergent, a mixture of the fungicide Benomyl and the bactericide Agri-mycin, followed by immersion in sodium hypochlorite (1.5% w/v) for 10 min, and culturing on Murashige and Skoog medium containing 2 ml l−1 of Plant Preservative Mixture®. Highest bud sprouting in original explants was observed when 3 mg l−1 N6-benzylaminopurine (BAP) was incorporated into the culture medium. Production of lateral shoots in in vitro growing plants increased with BAP concentration in culture medium, up to 5 mg l−1, the highest concentration assessed. After six subcultures, clumps of 8–12 axes were obtained, and their division in groups of 3–5 axes allowed multiplication of the plants. Rooting occurred in vitro spontaneously in 100% of the explants that produced lateral shoots. Successful acclimatization of well-rooted clumps of 5–6 axes was achieved in the greenhouse under mist watering in a mixture of soil, sand and rice hulls (1:1:1).", "title": "" }, { "docid": "32e864c7f9ee7258091ecc4604c7e346", "text": "\"The second edition is clearer and adds more examples on how to use STL in a practical environment. Moreover, it is more concerned with performance and tools for its measurement. Both changes are very welcome.\"--Lawrence Rauchwerger, Texas A&M University \"So many algorithms, so little time! The generic algorithms chapter with so many more examples than in the previous edition is delightful! The examples work cumulatively to give a sense of comfortable competence with the algorithms, containers, and iterators used.\"--Max A. Lebow, Software Engineer, Unisys Corporation The STL Tutorial and Reference Guide is highly acclaimed as the most accessible, comprehensive, and practical introduction to the Standard Template Library (STL). Encompassing a set of C++ generic data structures and algorithms, STL provides reusable, interchangeable components adaptable to many different uses without sacrificing efficiency. Written by authors who have been instrumental in the creation and practical application of STL, STL Tutorial and Reference Guide, Second Edition includes a tutorial, a thorough description of each element of the library, numerous sample applications, and a comprehensive reference. You will find in-depth explanations of iterators, generic algorithms, containers, function objects, and much more. 
Several larger, non-trivial applications demonstrate how to put STL's power and flexibility to work. This book will also show you how to integrate STL with object-oriented programming techniques. In addition, the comprehensive and detailed STL reference guide will be a constant and convenient companion as you learn to work with the library. This second edition is fully updated to reflect all of the changes made to STL for the final ANSI/ISO C++ language standard. It has been expanded with new chapters and appendices. Many new code examples throughout the book illustrate individual concepts and techniques, while larger sample programs demonstrate the use of the STL in real-world C++ software development. An accompanying Web site, including source code and examples referenced in the text, can be found at http://www.cs.rpi.edu/~musser/stl-book/index.html.", "title": "" }, { "docid": "416a3d01c713a6e751cb7893c16baf21", "text": "BACKGROUND\nAnaemia is associated with poor cancer control, particularly in patients undergoing radiotherapy. We investigated whether anaemia correction with epoetin beta could improve outcome of curative radiotherapy among patients with head and neck cancer.\n\n\nMETHODS\nWe did a multicentre, double-blind, randomised, placebo-controlled trial in 351 patients (haemoglobin <120 g/L in women or <130 g/L in men) with carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx. Patients received curative radiotherapy at 60 Gy for completely (R0) and histologically incomplete (R1) resected disease, or 70 Gy for macroscopically incompletely resected (R2) advanced disease (T3, T4, or nodal involvement) or for primary definitive treatment. All patients were assigned to subcutaneous placebo (n=171) or epoetin beta 300 IU/kg (n=180) three times weekly, from 10-14 days before and continuing throughout radiotherapy. The primary endpoint was locoregional progression-free survival. We assessed also time to locoregional progression and survival. Analysis was by intention to treat.\n\n\nFINDINGS\n148 (82%) patients given epoetin beta achieved haemoglobin concentrations higher than 140 g/L (women) or 150 g/L (men) compared with 26 (15%) given placebo. However, locoregional progression-free survival was poorer with epoetin beta than with placebo (adjusted relative risk 1.62 [95% CI 1.22-2.14]; p=0.0008). For locoregional progression the relative risk was 1.69 (1.16-2.47, p=0.007) and for survival was 1.39 (1.05-1.84, p=0.02).\n\n\nINTERPRETATION\nEpoetin beta corrects anaemia but does not improve cancer control or survival. Disease control might even be impaired. Patients receiving curative cancer treatment and given erythropoietin should be studied in carefully controlled trials.", "title": "" }, { "docid": "8738ec0c6e265f0248d7fa65de4cdd05", "text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. 
A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. These results recommend the use of CBI-24 to reduce response burden and research costs.", "title": "" }, { "docid": "09e882927b53708eef7648d16e6ec380", "text": "The main aim of the current paper is to develop a high-order numerical scheme to solve the space–time tempered fractional diffusion-wave equation. The convergence order of the proposed method is O(τ2 + h4). Also, we prove the unconditional stability and convergence of the developed method. The numerical results show the efficiency of the provided numerical scheme. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a934474bb38e37e8246ff561efd74bd3", "text": "While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. 
Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games", "title": "" }, { "docid": "039dddd12a436dc8ab8a36eef2d2ff6d", "text": "Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.", "title": "" }, { "docid": "dbfbdd4866d7fd5e34620c82b8124c3a", "text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. 
Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are assertions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a fortiori, on (a) and (c).[1] Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. [Footnote 1: It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference.] 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. 
(7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.[2] An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it; if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: (9) a. (6), but B refused to do it. b. (6), but B questioned his authority. [Footnote 2: Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses.] Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):[3] (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.[4] What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. 
Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). [Footnote 3: We owe this observation to Lauri Karttunen.] [Footnote 4: https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html] Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P ⊆ ℘(W) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = ∅, either p < q or q < p. Since preference structures are defined relative to an information state W, consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ", "title": "" }, { "docid": "e934c6e5797148d9cfa6cff5e3bec698", "text": "Ego level is a broad construct that summarizes individual differences in personality development [1]. We examine ego level as it is represented in natural language, using a composite sample of four datasets comprising nearly 44,000 responses. We find support for a developmental sequence in the structure of correlations between ego levels, in analyses of Linguistic Inquiry and Word Count (LIWC) categories [2] and in an examination of the individual words that are characteristic of each level. The LIWC analyses reveal increasing complexity and, to some extent, increasing breadth of perspective with higher levels of development. The characteristic language of each ego level suggests, for example, a shift from consummatory to appetitive desires at the lowest stages, a dawning of doubt at the Self-aware stage, the centrality of achievement motivation at the Conscientious stage, an increase in mutuality and intellectual growth at the Individualistic stage and some renegotiation of life goals and reflection on identity at the highest levels of development. Continuing empirical analysis of ego level and language will provide a deeper understanding of ego development, its relationship with other models of personality and individual differences, and its utility in characterizing people, texts and the cultural contexts that produce them. A linguistic analysis of nearly 44,000 responses to the Washington University Sentence Completion Test elucidates the construct of ego development (personality development through adulthood) and identifies unique linguistic markers of each level of development.", "title": "" }, { "docid": "ff429302ec983dd1203ac6dd97506ef8", "text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. 
They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute", "title": "" }, { "docid": "932088f443c5f0f3e239ed13032e56d7", "text": "Hydro Muscles are linear actuators resembling ordinary biological muscles in terms of active dynamic output, passive material properties and appearance. The passive and dynamic characteristics of the latex based Hydro Muscle are addressed. The control tests of modular muscles are presented together with a muscle model relating sensed quantities with net force. Hydro Muscles are discussed in the context of conventional actuators. The hypothesis that Hydro Muscles have greater efficiency than McKibben Muscles is experimentally verified. Hydro Muscle peak efficiency with (without) back flow consideration was 88% (27%). Possible uses of Hydro Muscles are illustrated by relevant robotics projects at WPI. It is proposed that Hydro Muscles can also be an excellent educational tool for moderate-budget robotics classrooms and labs; the muscles are inexpensive (in the order of standard latex tubes of comparable size), made of off-the-shelf elements in less than 10 minutes, easily customizable, lightweight, biologically inspired, efficient, compliant soft linear actuators that are adept for power-augmentation. Moreover, a single source can actuate many muscles by utilizing control of flow and/or pressure. Still further, these muscles can utilize ordinary tap water and successfully operate within a safe range of pressures not overly exceeding standard water household pressure of about 0.59 MPa (85 psi).", "title": "" }, { "docid": "1e4cb8960a99ad69e54e8c44fb21e855", "text": "Over the last decade, the endocannabinoid system has emerged as a pivotal mediator of acute and chronic liver injury, with the description of the role of CB1 and CB2 receptors and their endogenous lipidic ligands in various aspects of liver pathophysiology. A large number of studies have demonstrated that CB1 receptor antagonists represent an important therapeutic target, owing to beneficial effects on lipid metabolism and in light of its antifibrogenic properties. Unfortunately, the brain-penetrant CB1 antagonist rimonabant, initially approved for the management of overweight and related cardiometabolic risks, was withdrawn because of an alarming rate of mood adverse effects. However, the efficacy of peripherally-restricted CB1 antagonists with limited brain penetrance has now been validated in preclinical models of NAFLD, and beneficial effects on fibrosis and its complications are anticipated. 
CB2 receptor is currently considered as a promising anti-inflammatory and antifibrogenic target, although clinical development of CB2 agonists is still awaited. In this review, we highlight the latest advances on the impact of the endocannabinoid system on the key steps of chronic liver disease progression and discuss the therapeutic potential of molecules targeting cannabinoid receptors.", "title": "" }, { "docid": "397f6c39825a5d8d256e0cc2fbba5d15", "text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.", "title": "" }, { "docid": "f7ce06365e2c74ccbf8dcc04277cfb9d", "text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. This approach enhances the performance of detecting the distant LBs while maintaining low false detections.", "title": "" }, { "docid": "1ee540a265f71c1bf4b92c169556eaa3", "text": "Guided by the aim to construct light fields with spin-like orbital angular momentum (OAM), that is light fields with a uniform and intrinsic OAM density, we investigate the OAM of arrays of optical vortices with rectangular symmetry. We find that the OAM per unit cell depends on the choice of unit cell and can even change sign when the unit cell is translated. This is the case even if the OAM in each unit cell is intrinsic, that is independent of the choice of measurement axis. We show that spin-like OAM can be found only if the OAM per unit cell vanishes. 
Our results are applicable to the z component of the angular momentum of any x- and y-periodic momentum distribution in the xy plane, and can also be applied other periodic light beams, arrays of rotating massive objects and periodic motion of liquids.", "title": "" }, { "docid": "8eb5e5d7c224782506aba37dcb91614f", "text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among", "title": "" }, { "docid": "7bf959cd3d5ffaf845510ce0eb69c6d6", "text": "This paper describes the approach that was developed for SemEval 2018 Task 2 (Multilingual Emoji Prediction) by the DUTH Team. First, we employed a combination of preprocessing techniques to reduce the noise of tweets and produce a number of features. Then, we built several N-grams, to represent the combination of word and emojis. Finally, we trained our system with a tuned LinearSVC classifier. Our approach in the leaderboard ranked 18th amongst 48 teams.", "title": "" }, { "docid": "c543f7a65207e7de9cc4bc6fa795504a", "text": "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.", "title": "" }, { "docid": "49f955fb928955da09a3bfe08efe78bc", "text": "A novel macro model approach for modeling ESD MOS snapback is introduced. The macro model consists of standard components only. 
It includes a MOS transistor modeled by BSIM3v3, a bipolar transistor modeled by VBIC, and a resistor for substrate resistance. No external current source, which is essential in most publicly reported macro models, is included since both BSIM3vs and VBIC have formulations built in to model the relevant effects. The simplicity of the presented macro model makes behavior languages, such as Verilog-A, and special ESD equations not necessary in model implementation. This offers advantages of high simulation speed, wider availability, and less convergence issues. Measurement and simulation of the new approach indicates that good silicon correlation can be achieved.", "title": "" } ]
scidocsrr
c0224b859e856875fef59a0c77f04b2f
Map-Reduce for Machine Learning on Multicore
[ { "docid": "6b038c702a3636664a2f7d4e3dcde4ff", "text": "This article is reprinted from the Internaional Electron Devices Meeting (1975). It discusses the complexity of integrated circuits, identifies their manufacture, production, and deployment, and addresses trends to their future deployment.", "title": "" } ]
[ { "docid": "b9e4a201050b379500e5e8a2bca81025", "text": "On the basis of a longitudinal field study of domestic communication, we report some essential constituents of the user experience of awareness of others who are distant in space or time, i.e. presence-in-absence. We discuss presence-in-absence in terms of its social (Contact) and informational (Content) facets, and the circumstances of the experience (Context). The field evaluation of a prototype, 'The Cube', designed to support presence-in-absence, threw up issues in the interrelationships between contact, content and context; issues that the designers of similar social artifacts will need to address.", "title": "" }, { "docid": "bc5a3cd619be11132ea39907f732bf4c", "text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.", "title": "" }, { "docid": "2a43e164e536600ee6ceaf6a9c1af1be", "text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.", "title": "" }, { "docid": "611b985ae194f562e459dc78f7aafdc3", "text": "In order to understand the formation and subsequent evolution of galaxies one must first distinguish between the two main morphological classes of massive systems: spirals and early-type systems. This paper introduces a project, Galaxy Zoo, which provides visual morphological classifications for nearly one million galaxies, extracted from the Sloan Digital Sky Survey (SDSS). This achievement was made possible by inviting the general public to visually inspect and classify these galaxies via the internet. The project has obtained more than 4 × 107 individual classifications made by ∼105 participants. We discuss the motivation and strategy for this project, and detail how the classifications were performed and processed. We find that Galaxy Zoo results are consistent with those for subsets of SDSS galaxies classified by professional astronomers, thus demonstrating that our data provide a robust morphological catalogue. 
Obtaining morphologies by direct visual inspection avoids introducing biases associated with proxies for morphology such as colour, concentration or structural parameters. In addition, this catalogue can be used to directly compare SDSS morphologies with older data sets. The colour–magnitude diagrams for each morphological class are shown, and we illustrate how these distributions differ from those inferred using colour alone as a proxy for", "title": "" }, { "docid": "8d07f52f154f81ce9dedd7c5d7e3182d", "text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.", "title": "" }, { "docid": "ac96a4c1644dfbabc1dd02878c43c966", "text": "A labeled text corpus made up of Turkish papers' titles, abstracts and keywords is collected. The corpus includes 35 number of different disciplines, and 200 documents per subject. This study presents the text corpus' collection and content. The classification performance of Term Frequcney - Inverse Document Frequency (TF-IDF) and topic probabilities of Latent Dirichlet Allocation (LDA) features are compared for the text corpus. The text corpus is shared as open source so that it could be used for natural language processing applications with academic purposes.", "title": "" }, { "docid": "242e78ed606d13502ace6d5eae00b315", "text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.", "title": "" }, { "docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2", "text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. 
The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet", "title": "" }, { "docid": "77d80da2b0cd3e8598f9c677fc8827a9", "text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.", "title": "" }, { "docid": "e61a0ba24db737d42a730d5738583ffa", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "c227cae0ec847a227945f1dec0b224d2", "text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.", "title": "" }, { "docid": "cf45599aeb22470b7922fc64394f114c", "text": "This paper addresses the task of assigning multiple labels of fine-grained named entity (NE) types to Wikipedia articles. 
To address the sparseness of the input feature space, which is salient particularly in fine-grained type classification, we propose to learn article vectors (i.e. entity embeddings) from hypertext structure of Wikipedia using a Skip-gram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The results of our experiments show that our idea gained statistically significant improvements in classification results.", "title": "" }, { "docid": "9d19d15b070faf62ecfa99d90e37b908", "text": "Title of Thesis: SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM Degree candidate: Phillip Yip Degree and year: Master of Science, 2004 Thesis directed by: Assistant Professor Dimitrios Hristu-Varsakelis Department of Mechanical Engineering Modern control systems often consist of networks of components that must share a common communication channel. Not all components of the networked control system can communicate with one another simultaneously at any given time. The “attention” that each component receives is an important factor that affects the system’s overall performance. An effective controller should ensure that sensors and actuators receive sufficient attention. This thesis describes a “ball-on-plate” dynamical system that includes a digital controller, which communicates with a pair of language-driven actuators, and an overhead camera. A control algorithm was developed to restrict the ball to a small region on the plate using a quantized set of language-based commands. The size of this containment region was analytically determined as a function of the communication constraints and other control system parameters. The effectiveness of the proposed control law was evaluated in experiments and mathematical simulations. SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM by Phillip Yip Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science 2004 Advisory Commmittee: Assistant Professor Dimitrios Hristu-Varsakelis, Chair/Advisor Professor Balakumar Balachandran Professor Amr Baz c ©Copyright by Phillip T. Yip 2004 DEDICATION: To my family", "title": "" }, { "docid": "3f40b9d1dfff00d8310f08df12096d63", "text": "This paper explores a monetary policy model with habit formation for consumers, in which consumers’ utility depends in part on current consumption relative to past consumption. The empirical tests developed in the paper show that one can reject the hypothesis of no habit formation with tremendous confidence, largely because the habit formation model captures the gradual hump-shaped response of real spending to various shocks. The paper then embeds the habit consumption specification in a monetary policy model and finds that the responses of both spending and inflation to monetary policy actions are significantly improved by this modification. (JEL D12, E52, E43) Forthcoming, American Economic Review, June 2000. With the resurgence of interest in the effects of monetary policy on the macroeconomy, led by the work of the Christina D. and David H. Romer (1989), Ben S. Bernanke and Alan S. Blinder (1992), Lawrence J. Christiano, Martin S. Eichenbaum, and Charles L. Evans (1996), and others, the need for a structural model that could plausibly be used for monetary policy analysis has become evident. 
Of course, many extant models have been used for monetary policy analysis, but many of these are perceived as having critical shortcomings. First, some models do not incorporate explicit expectations behavior, so that changes in policy (or private) behavior could cause shifts in reduced-form parameters (i.e., the critique of Robert E. Lucas 1976). Others incorporate expectations, but derive key relationships from ad hoc behavioral assumptions, rather than from explicit optimizing problems for consumers and firms (Fuhrer and George R. Moore 1995b is an example). Explicit expectations and optimizing behavior are both desirable, other things equal, for a model of monetary analysis. First, analyzing potential improvements to monetary policy relative to historical policies requires a model that is stable across alternative policy regimes. This underlines the importance of explicit expectations formation. Second, the “optimal” in optimal monetary policy must ultimately refer to social welfare. Many have approximated social welfare with weighted averages of output and inflation variances, but one cannot know how good these approximations are without more explicit modeling of welfare. This implies that the model be closely tied to the underlying objectives of consumers and firms, hence the emphasis on optimization-based models. A critical test for whether a model reflects underlying objectives is its ability to accurately reflect the dominant dynamic interactions in the data. A number of recent papers (see, for example, Robert G. King and Alexander L. Wolman (1996), Bennett T. McCallum and Edward Nelson (1999a, 1999b); Julio R. Rotemberg and Michael Woodford (1997)) have developed models that incorporate explicit expectations, optimizing behavior, and frictions that allow monetary policy to have real effects. This paper continues in that line of research by documenting the empirical importance of a key feature of aggregate data: the “hump-shaped,” gradual response of spending and inflation to shocks. It then develops a monetary policy model that can capture this feature, as well as all of the features (e.g. the real effects of monetary policy, the persistence of inflation and output) embodied in earlier models. The key to the model’s success on the spending side is the inclusion of habit formation in the consumer’s utility function. This modification", "title": "" }, { "docid": "f709802a6da7db7c71dfa67930111b04", "text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. 
We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.", "title": "" }, { "docid": "81243e721527e74f0997d6aeb250cc23", "text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.", "title": "" }, { "docid": "22c6ae71c708d5e2d1bc7e5e085c4842", "text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.", "title": "" }, { "docid": "31e8d60af8a1f9576d28c4c1e0a3db86", "text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. 
It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.", "title": "" }, { "docid": "fbebf8aaeadbd4816a669bd0b23e0e2b", "text": "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so single point of failure may leads to the collapse of system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret key for data users and encrypt shared data by specifying access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contract on the Ethereum blockchain, the keyword search function on the cipher text of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.", "title": "" }, { "docid": "0342f89c44e0b86026953196de34b608", "text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.", "title": "" } ]
scidocsrr
670e509f17f1f032a90f88c1dcfc2d9b
A Warning System for Obstacle Detection at Vehicle Lateral Blind Spot Area
[ { "docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0", "text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.", "title": "" } ]
[ { "docid": "b3edfd5b56831080a663faeb0e159627", "text": "Because wireless sensor networks (WSNs) are becoming increasingly integrated into daily life, solving the energy efficiency problem of such networks is an urgent problem. Many energy-efficient algorithms have been proposed to reduce energy consumption in traditional WSNs. The emergence of software-defined networks (SDNs) enables the transformation of WSNs. Some SDN-based WSNs architectures have been proposed and energy-efficient algorithms in SDN-based WSNs architectures have been studied. In this paper, we integrate an SDN into WSNs and an improved software-defined WSNs (SD-WSNs) architecture is presented. Based on the improved SD-WSNs architecture, we propose an energy-efficient algorithm. This energy-efficient algorithm is designed to match the SD-WSNs architecture, and is based on the residual energy and the transmission power, and the game theory is introduced to extend the network lifetime. Based on the SD-WSNs architecture and the energy-efficient algorithm, we provide a detailed introduction to the operating mechanism of the algorithm in the SD-WSNs. The simulation results show that our proposed algorithm performs better in terms of balancing energy consumption and extending the network lifetime compared with the typical energy-efficient algorithms in traditional WSNs.", "title": "" }, { "docid": "338dcbb45ff0c1752eeb34ec1be1babe", "text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural", "title": "" }, { "docid": "fae65e55a1a670738d39a3d2db279ceb", "text": "This paper presents a method to extract tone relevant features based on pitch flux from continuous speech signal. The autocorrelations of two adjacent frames are calculated and the covariance between them is estimated to extract multi-dimensional pitch flux features. These features, together with MFCCs, are modeled in a 2-stream GMM models, and are tested in a 3-dialect identification task for Chinese. The pitch flux features have shown to be very effective in identifying tonal languages with short speech segments. For the test speech segments of 3 seconds, 2-stream model achieves more than 30% error reduction over MFCC-based model", "title": "" }, { "docid": "cf3ee200705e8bb564303bd758e8e235", "text": "The current state of the art in playing many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework and inspired by AlphaZero. We use Online Outcome Sampling, an online search algorithm for imperfect information games in place of MCTS. While training online, our neural strategy is used to improve the accuracy of playouts in OOS, allowing a learning and planning feedback loop for imperfect information games.", "title": "" }, { "docid": "dd545adf1fba52e794af4ee8de34fc60", "text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. 
Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.", "title": "" }, { "docid": "2e40682bca56659428d2919191e1cbf3", "text": "Single-cell RNA-Seq (scRNA-Seq) has attracted much attention recently because it allows unprecedented resolution into cellular activity; the technology, therefore, has been widely applied in studying cell heterogeneity such as the heterogeneity among embryonic cells at varied developmental stages or cells of different cancer types or subtypes. A pertinent question in such analyses is to identify cell subpopulations as well as their associated genetic drivers. Consequently, a multitude of approaches have been developed for clustering or biclustering analysis of scRNA-Seq data. In this article, we present a fast and simple iterative biclustering approach called \"BiSNN-Walk\" based on the existing SNN-Cliq algorithm. One of BiSNN-Walk's differentiating features is that it returns a ranked list of clusters, which may serve as an indicator of a cluster's reliability. Another important feature is that BiSNN-Walk ranks genes in a gene cluster according to their level of affiliation to the associated cell cluster, making the result more biologically interpretable. We also introduce an entropy-based measure for choosing a highly clusterable similarity matrix as our starting point among a wide selection to facilitate the efficient operation of our algorithm. We applied BiSNN-Walk to three large scRNA-Seq studies, where we demonstrated that BiSNN-Walk was able to retain and sometimes improve the cell clustering ability of SNN-Cliq. We were able to obtain biologically sensible gene clusters in terms of GO term enrichment. In addition, we saw that there was significant overlap in top characteristic genes for clusters corresponding to similar cell states, further demonstrating the fidelity of our gene clusters.", "title": "" }, { "docid": "0b1e0145affcdf2ff46580d9e5615211", "text": "Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.", "title": "" }, { "docid": "b26882cddec1690e3099757e835275d2", "text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. 
Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.", "title": "" }, { "docid": "4dd2fc66b1a2f758192b02971476b4cc", "text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. 
To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.", "title": "" }, { "docid": "0ae071bc719fdaac34a59991e66ab2b8", "text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.", "title": "" }, { "docid": "4f87b93eb66b7126c53ee8126151f77f", "text": "We propose a convolutional neural network architecture with k-max pooling layer for semantic modeling of music. The aim of a music model is to analyze and represent the semantic content of music for purposes of classification, discovery, or clustering. The k-max pooling layer is used in the network to make it possible to pool the k most active features, capturing the semantic-rich and time-varying information about music. Our network takes an input music as a sequence of audio words, where each audio word is associated with a distributed feature vector that can be fine-tuned by backpropagating errors during the training. The architecture allows us to take advantage of the better trained audio word embeddings and the deep structures to produce more robust music representations. Experiment results with two different music collections show that our neural networks achieved the best accuracy in music genre classification comparing with three state-of-art systems.", "title": "" }, { "docid": "e711f9f57e1c3c22c762bf17cb6afd2b", "text": "Qualitative research methodology has become an established part of the medical education research field. A very popular data-collection technique used in qualitative research is the \"focus group\". 
Focus groups in this Guide are defined as \"… group discussions organized to explore a specific set of issues … The group is focused in the sense that it involves some kind of collective activity … crucially, focus groups are distinguished from the broader category of group interview by the explicit use of the group interaction as research data\" (Kitzinger 1994, p. 103). This Guide has been designed to provide people who are interested in using focus groups with the information and tools to organize, conduct, analyze and publish sound focus group research within a broader understanding of the background and theoretical grounding of the focus group method. The Guide is organized as follows: Firstly, to describe the evolution of the focus group in the social sciences research domain. Secondly, to describe the paradigmatic fit of focus groups within qualitative research approaches in the field of medical education. After defining, the nature of focus groups and when, and when not, to use them, the Guide takes on a more practical approach, taking the reader through the various steps that need to be taken in conducting effective focus group research. Finally, the Guide finishes with practical hints towards writing up a focus group study for publication.", "title": "" }, { "docid": "f7bc42beb169e42496b674c918541865", "text": "Brain endothelial cells are unique among endothelial cells in that they express apical junctional complexes, including tight junctions, which quite resemble epithelial tight junctions both structurally and functionally. They form the blood-brain-barrier (BBB) which strictly controls the exchanges between the blood and the brain compartments by limiting passive diffusion of blood-borne solutes while actively transporting nutrients to the brain. Accumulating experimental and clinical evidence indicate that BBB dysfunctions are associated with a number of serious CNS diseases with important social impacts, such as multiple sclerosis, stroke, brain tumors, epilepsy or Alzheimer's disease. This review will focus on the implication of brain endothelial tight junctions in BBB architecture and physiology, will discuss the consequences of BBB dysfunction in these CNS diseases and will present some therapeutic strategies for drug delivery to the brain across the BBB.", "title": "" }, { "docid": "8ea6c4957443916c2102f8a173f9d3dc", "text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. 
However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.", "title": "" }, { "docid": "1fb0344be6a5da582e0563dceca70d44", "text": "Self-mutilating behaviors could be minor and benign, but more severe cases are usually associated with psychiatric disorders or with acquired nervous system lesions and could be life-threatening. The patient was a 66-year-old man who had been mutilating his fingers for 6 years. This behavior started as serious nail biting and continued as severe finger mutilation (by biting), resulting in loss of the terminal phalanges of all fingers in both hands. On admission, he complained only about insomnia. The electromyography showed severe peripheral nerve damage in both hands and feet caused by severe diabetic neuropathy. Cognitive decline was not established (Mini Mental State Examination score, 28), although the computed tomographic scan revealed serious brain atrophy. He was given a diagnosis of impulse control disorder not otherwise specified. His impulsive biting improved markedly when low doses of haloperidol (1.5 mg/day) were added to fluoxetine (80 mg/day). In our patient's case, self-mutilating behavior was associated with severe diabetic neuropathy, impulsivity, and social isolation. The administration of a combination of an antipsychotic and an antidepressant proved to be beneficial.", "title": "" }, { "docid": "03eabf03f8ac967c728ff35b77f3dd84", "text": "In this paper, we tackle the problem of associating combinations of colors to abstract categories (e.g. capricious, classic, cool, delicate, etc.). It is evident that such concepts would be difficult to distinguish using single colors, therefore we consider combinations of colors or color palettes. We leverage two novel databases for color palettes and we learn categorization models using low and high level descriptors. Preliminary results show that Fisher representation based on GMMs is the most rewarding strategy in terms of classification performance over a baseline model. We also suggest a process for cleaning weakly annotated data, whilst preserving the visual coherence of categories. Finally, we demonstrate how learning abstract categories on color palettes can be used in the application of color transfer, personalization and image re-ranking.", "title": "" }, { "docid": "e5c625ceaf78c66c2bfb9562970c09ec", "text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. 
The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>", "title": "" }, { "docid": "d272cf01340c8dcc3c24651eaf876926", "text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.", "title": "" }, { "docid": "2b569d086698cffc0cba2dc3fe0ab8a6", "text": "Home security should be a top concern for everyone who owns or rents a home. Moreover, safe and secure residential space is the necessity of every individual as most of the family members are working. The home is left unattended for most of the day-time and home invasion crimes are at its peak as constantly monitoring of the home is difficult. Another reason for the need of home safety is specifically when the elderly person is alone or the kids are with baby-sitter and servant. Home security system i.e. HomeOS is thus applicable and desirable for resident’s safety and convenience. This will be achieved by turning your home into a smart home by intelligent remote monitoring. Smart home comes into picture for the purpose of controlling and monitoring the home. It will give you peace of mind, as you can have a close watch and stay connected anytime, anywhere. But, is common man really concerned about home security? An investigative study was done by conducting a survey to get the inputs from different people from diverse backgrounds. The main motivation behind this survey was to make people aware of advanced HomeOS and analyze their need for security. This paper also studied the necessity of HomeOS investigative study in current situation where the home burglaries are rising at an exponential rate. In order to arrive at findings and conclusions, data were analyzed. The graphical method was employed to identify the relative significance of home security. From this analysis, we can infer that the cases of having kids and aged person at home or location of home contribute significantly to the need of advanced home security system. At the end, the proposed system model with its flow and the challenges faced while implementing home security systems are also discussed.", "title": "" }, { "docid": "da088acea8b1d2dc68b238e671649f4f", "text": "Water is a naturally circulating resource that is constantly recharged. 
Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.", "title": "" } ]
scidocsrr
e12810a39baa7c96646907aceec16c72
An effective solution for a real cutting stock problem in manufacturing plastic rolls
[ { "docid": "74381f9602374af5ad0775a69163d1b9", "text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.", "title": "" } ]
[ { "docid": "a4605974c90bc17edf715eb9edb10b8a", "text": "Natural language processing has been in existence for more than fifty years. During this time, it has significantly contributed to the field of human-computer interaction in terms of theoretical results and practical applications. As computers continue to become more affordable and accessible, the importance of user interfaces that are effective, robust, unobtrusive, and user-friendly – regardless of user expertise or impediments – becomes more pronounced. Since natural language usually provides for effortless and effective communication in human-human interaction, its significance and potential in human-computer interaction should not be overlooked – either spoken or typewritten, it may effectively complement other available modalities, such as windows, icons, and menus, and pointing; in some cases, such as in users with disabilities, natural language may even be the only applicable modality. This chapter examines the field of natural language processing as it relates to humancomputer interaction by focusing on its history, interactive application areas, theoretical approaches to linguistic modeling, and relevant computational and philosophical issues. It also presents a taxonomy for interactive natural language systems based on their linguistic knowledge and processing requirements, and reviews related applications. Finally, it discusses linguistic coverage issues, and explores the development of natural language widgets and their integration into multimodal user interfaces.", "title": "" }, { "docid": "e3f4add37a083f61feda8805478d0729", "text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. 
DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.", "title": "" }, { "docid": "72cf634b61876d3ad9c265e61f1148ae", "text": "Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter-the weighting exponent m of the FCM model-plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni, and extended Xie-Beni indexes, and the FukuyamaSugeno index. Limit analysis indicates, and numerical experiments confirm, that the FukuyamaSugeno index is sensitive to both high and low values of m and may be unreliable because of this. Of the indexes tested, the Xie-Beni index provided the best response over a wide range of choices for the number of clusters, (%lo), and for m from 1.01-7. Finally, our calculations suggest that the best choice for m is probably in the interval [U, 2.51, whose mean and midpoint, m = 2, have often been the preferred choice for many users of FCM.", "title": "" }, { "docid": "18848101a74a23d6740f08f86992a4a4", "text": "Post-traumatic stress disorder (PTSD) is accompanied by disturbed sleep and an impaired ability to learn and remember extinction of conditioned fear. Following a traumatic event, the full spectrum of PTSD symptoms typically requires several months to develop. During this time, sleep disturbances such as insomnia, nightmares, and fragmented rapid eye movement sleep predict later development of PTSD symptoms. Only a minority of individuals exposed to trauma go on to develop PTSD. We hypothesize that sleep disturbance resulting from an acute trauma, or predating the traumatic experience, may contribute to the etiology of PTSD. Because symptoms can worsen over time, we suggest that continued sleep disturbances can also maintain and exacerbate PTSD. Sleep disturbance may result in failure of extinction memory to persist and generalize, and we suggest that this constitutes one, non-exclusive mechanism by which poor sleep contributes to the development and perpetuation of PTSD. Also reviewed are neuroendocrine systems that show abnormalities in PTSD, and in which stress responses and sleep disturbance potentially produce synergistic effects that interfere with extinction learning and memory. Preliminary evidence that insomnia alone can disrupt sleep-dependent emotional processes including consolidation of extinction memory is also discussed. We suggest that optimizing sleep quality following trauma, and even strategically timing sleep to strengthen extinction memories therapeutically instantiated during exposure therapy, may allow sleep itself to be recruited in the treatment of PTSD and other trauma and stress-related disorders.", "title": "" }, { "docid": "51ba2c02aa4ad9b7cfb381ddae0f3dfe", "text": "The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. 
While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate.", "title": "" }, { "docid": "4c2f9f9681a1d3bc6d9a27a59c2a01d6", "text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. 
(Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).", "title": "" }, { "docid": "6b55931c9945a71de6b28789323f191b", "text": "Resistant hypertension-uncontrolled hypertension with 3 or more antihypertensive agents-is increasingly common in clinical practice. Clinicians should exclude pseudoresistant hypertension, which results from nonadherence to medications or from elevated blood pressure related to the white coat syndrome. In patients with truly resistant hypertension, thiazide diuretics, particularly chlorthalidone, should be considered as one of the initial agents. The other 2 agents should include calcium channel blockers and angiotensin-converting enzyme inhibitors for cardiovascular protection. An increasing body of evidence has suggested benefits of mineralocorticoid receptor antagonists, such as eplerenone and spironolactone, in improving blood pressure control in patients with resistant hypertension, regardless of circulating aldosterone levels. Thus, this class of drugs should be considered for patients whose blood pressure remains elevated after treatment with a 3-drug regimen to maximal or near maximal doses. Resistant hypertension may be associated with secondary causes of hypertension including obstructive sleep apnea or primary aldosteronism. Treating these disorders can significantly improve blood pressure beyond medical therapy alone. The role of device therapy for treating the typical patient with resistant hypertension remains unclear.", "title": "" }, { "docid": "a0fc4982c5d63191ab1b15deff4e65d6", "text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.", "title": "" }, { "docid": "be45e9231cc468c8f9551868c1d13938", "text": "We present a user-centric approach for stream surface generation. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to draw simple strokes directly on top of the streamline visualization result. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. 
We present results using multiple data sets to show the effectiveness of our approach, and discuss the limitations and future directions.", "title": "" }, { "docid": "59786d8ea951639b8b9a4e60c9d43a06", "text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.", "title": "" }, { "docid": "ee3c2f50a7ea955d33305a3e02310109", "text": "This research strives for natural language moment retrieval in long, untrimmed video streams. The problem nevertheless is not trivial especially when a video contains multiple moments of interests and the language describes complex temporal dependencies, which often happens in real scenarios. We identify two crucial challenges: semantic misalignment and structural misalignment. However, existing approaches treat different moments separately and do not explicitly model complex moment-wise temporal relations. In this paper, we present Moment Alignment Network (MAN), a novel framework that unifies the candidate moment encoding and temporal structural reasoning in a single-shot feed-forward network. MAN naturally assigns candidate moment representations aligned with language semantics over different temporal locations and scales. Most importantly, we propose to explicitly model momentwise temporal relations as a structured graph and devise an iterative graph adjustment network to jointly learn the best structure in an end-to-end manner. We evaluate the proposed approach on two challenging public benchmarks Charades-STA and DiDeMo, where our MAN significantly outperforms the state-of-the-art by a large margin.", "title": "" }, { "docid": "8f9e3bb85b4a2fcff3374fd700ac3261", "text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.", "title": "" }, { "docid": "caa35f58e9e217fd45daa2e49c4a4cde", "text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. 
All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. 
Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs. to lexical output) to one that performs generation (lexical input to surface output). This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y ̃b•l y1-sEbr-al ‘he breaks’, ° ̃¤’ tEsEbbEr-E ‘it was broken’, ‰ ̃b’w l-assEbb1r-Ew , ‘let me cause him to break something’, ̃§§” sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. 
In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.", "title": "" }, { "docid": "18d8fe3f77ab8878ae2eb72b04fa8a48", "text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.", "title": "" }, { "docid": "2ed16f9344f5c5b024095a4e27283596", "text": "An overview is presented of the impact of NLO on today's daily life. While NLO researchers have promised many applications, only a few have changed our lives so far. This paper categorizes applications of NLO into three areas: improving lasers, interaction with materials, and information technology. NLO provides: coherent light of different wavelengths; multi-photon absorption for plasma-materials interaction; advanced spectroscopy and materials analysis; and applications to communications and sensors. Applications in information processing and storage seem less mature.", "title": "" }, { "docid": "2c3bfdb36a691434ece6b9f3e7e281e9", "text": "Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. 
Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.", "title": "" }, { "docid": "556c9a28f9bbd81d53e093b139ce7866", "text": "This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.", "title": "" }, { "docid": "76375aa50ebe8388d653241ba481ecd2", "text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. 
We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.", "title": "" }, { "docid": "0fa35886300345106390cc55c6025257", "text": "Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.", "title": "" }, { "docid": "107c839a73c12606d4106af7dc04cd96", "text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.", "title": "" } ]
scidocsrr
35c81e99bc7bb0be3ec777516308dfb9
Supply chain ontology: Review, analysis and synthesis
[ { "docid": "910c42c4737d38db592f7249c2e0d6d2", "text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended", "title": "" } ]
[ { "docid": "928ed1aed332846176ad52ce7cc0754c", "text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (", "title": "" }, { "docid": "c237facfc6639dfff82659f927a25267", "text": "The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.", "title": "" }, { "docid": "755f7e93dbe43a0ed12eb90b1d320cb2", "text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).", "title": "" }, { "docid": "e72ed2b388577122402831d4cd75aa0f", "text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. 
Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.", "title": "" }, { "docid": "b0103474ecd369a9f0ba637c34bacc56", "text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). 
Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.", "title": "" }, { "docid": "ef6160d304908ea87287f2071dea5f6d", "text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.", "title": "" }, { "docid": "e8e3f77626742ef7aa40703e3113f148", "text": "This paper presents a multi-agent based framework for target tracking. We exploit the agent-oriented software paradigm with its characteristics that provide intelligent autonomous behavior together with a real time computer vision system to achieve high performance real time target tracking. The framework consists of four layers; interface, strategic, management, and operation layers. Interface layer receives from the user the tracking parameters such as the number and type of trackers and targets and type of the tracking environment, and then delivers these parameters to the subsequent layers. Strategic (decision making) layer is provided with a knowledge base of target tracking methodologies that are previously implemented by researchers in diverse target tracking applications and are proven successful. And by inference in the knowledge base using the user input a tracking methodology is chosen. Management layer is responsible for pursuing and controlling the tracking methodology execution. Operation layer represents the phases in the tracking methodology and is responsible for communicating with the real-time computer vision system to execute the algorithms in the phases. 
The framework is presented with a case study to show its ability to tackle the target tracking problem and its flexibility to solve the problem with different tracking parameters. This paper describes the ability of the agent-based framework to deploy any real-time vision system that fits in solving the target tracking problem. It is a step towards a complete open standard, real-time, agent-based framework for target tracking.", "title": "" }, { "docid": "871af4524fcbbae44ba9139bef3481d0", "text": "AIM\n'Othering' is described as a social process whereby a dominant group or person uses negative attributes to define and subordinate others. Literature suggests othering creates exclusive relationships and puts patients at risk for suboptimal care. A concept analysis delineating the properties of othering was conducted to develop knowledge to support inclusionary practices in nursing.\n\n\nDESIGN\nRodgers' Evolutionary Method for concept analysis guided this study.\n\n\nMETHODS\nThe following databases were searched spanning the years 1999-2015: CINAHL, PUBMED, PsychINFO and Google. Search terms included \"othering\", \"nurse\", \"other\", \"exclusion\" and \"patient\".\n\n\nRESULTS\nTwenty-eight papers were analyzed whereby definitions, related concepts and othering attributes were identified. Findings support that othering in nursing is a sequential process with a trajectory aimed at marginalization and exclusion, which in turn has a negative impact on patient care and professional relationships. Implications are discussed in terms of deriving practical solutions to disrupt othering. We conclude with a conceptual foundation designed to support inclusionary strategies in nursing.", "title": "" }, { "docid": "b15f185258caa9d355fae140a41ae03c", "text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. 
Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fails to comply with them in the end. This state of affairs (level of descriptiveness[6]), is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) ± but only on an abstract level (as seen in Table I, the issue islevel (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user", "title": "" }, { "docid": "5c8ab947856945b32d4d3e0edc89a9e0", "text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. 
Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.", "title": "" }, { "docid": "29aa7084f7d6155d4626b682a5fc88ef", "text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.", "title": "" }, { "docid": "46e37ce77756f58ab35c0930d45e367f", "text": "In this letter, we propose an enhanced stereophonic acoustic echo suppression (SAES) algorithm incorporating spectral and temporal correlations in the short-time Fourier transform (STFT) domain. Unlike traditional stereophonic acoustic echo cancellation, SAES estimates the echo spectra in the STFT domain and uses a Wiener filter to suppress echo without performing any explicit double-talk detection. The proposed approach takes account of interdependencies among components in adjacent time frames and frequency bins, which enables more accurate estimation of the echo signals. Experimental results show that the proposed method yields improved performance compared to that of conventional SAES.", "title": "" }, { "docid": "e8681043d4551f6da335a649a6d7b13c", "text": "In recent years, wireless communication particularly in the front-end transceiver architecture has increased its functionality. This trend is continuously expanding and of particular is reconfigurable radio frequency (RF) front-end. A multi-band single chip architecture which consists of an array of switches and filters could simplify the complexity of the current superheterodyne architecture. In this paper, the design of a Single Pole Double Throw (SPDT) switch using 0.35μm Complementary Metal Oxide Semiconductor (CMOS) technology is discussed. The SPDT RF CMOS switch was then simulated in the range of frequency of 0-2GHz. At 2 GHz, the switch exhibits insertion loss of 1.153dB, isolation of 21.24dB, P1dB of 21.73dBm and IIP3 of 26.02dBm. Critical RF T/R switch characteristic such as insertion loss, isolation, power 1dB compression point and third order intercept point, IIP3 is discussed and compared with other type of switch designs. Pre and post layout simulation of the SPDT RF CMOS switch are also discussed to analyze the effect of parasitic capacitance between components' interconnection.", "title": "" }, { "docid": "dbf8e0125944b526f7b14c98fc46afa2", "text": "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. 
Motivated by the success of R-CNN [1] on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonly used strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods.", "title": "" }, { "docid": "d69573f767b2e72bcff5ed928ca8271c", "text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.", "title": "" }, { "docid": "33e6abc5ed78316cc03dae8ba5a0bfc8", "text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow us to compute local or global point features depending on the spatial extent of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets.", "title": "" }, { "docid": "23d9479a38afa6e8061fe431047bed4e", "text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive real-time public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications.
In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.", "title": "" }, { "docid": "0408aeb750ca9064a070248f0d32d786", "text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.", "title": "" }, { "docid": "8a708ec1187ecb2fe9fa929b46208b34", "text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. 
Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.", "title": "" }, { "docid": "95037e7dc3ae042d64a4b343ad4efd39", "text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.", "title": "" } ]
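The face-verification passage above trains DCNNs with a triplet-based loss that maximizes the ratio of negative-pair to positive-pair distances while also minimizing the absolute distances between positive images. The snippet below is only a rough sketch of that idea, not the paper's exact formulation: the function name, the `alpha` weighting of the absolute term, the embedding size, and the toy data are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def ratio_triplet_loss(anchor, positive, negative, alpha=0.1, eps=1e-8):
    """Illustrative triplet-style loss: pushes the negative-pair distance to dominate
    the positive-pair distance (ratio term) while also shrinking the absolute
    anchor-positive distance, loosely following the description in the passage above."""
    d_pos = F.pairwise_distance(anchor, positive)   # distances between positive pairs
    d_neg = F.pairwise_distance(anchor, negative)   # distances between negative pairs
    ratio_term = d_pos / (d_neg + eps)              # small when negatives are far and positives close
    absolute_term = alpha * d_pos                   # also minimize absolute positive distances
    return (ratio_term + absolute_term).mean()

# toy usage with random 128-d embeddings (batch of 4)
a = torch.randn(4, 128, requires_grad=True)
p = torch.randn(4, 128, requires_grad=True)
n = torch.randn(4, 128, requires_grad=True)
loss = ratio_triplet_loss(a, p, n)
loss.backward()
print(float(loss))
```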
scidocsrr
1bc8b083b81954925146ea8e9941badf
Experimental Investigation of Light-Gauge Steel Plate Shear Walls
[ { "docid": "8f3b3611ee8a52753e026625f6ccd12e", "text": "plate is ntation of by plastic plex, wall ection of procedure Abstract: A revised procedure for the design of steel plate shear walls is proposed. In this procedure the thickness of the infill found using equations that are derived from the plastic analysis of the strip model, which is an accepted model for the represe steel plate shear walls. Comparisons of experimentally obtained ultimate strengths of steel plate shear walls and those predicted analysis are given and reasonable agreement is observed. Fundamental plastic collapse mechanisms for several, more com configurations are also given. Additionally, an existing codified procedure for the design of steel plate walls is reviewed and a s this procedure which could lead to designs with less-than-expected ultimate strength is identified. It is shown that the proposed eliminates this possibility without changing the other valid sections of the current procedure.", "title": "" } ]
[ { "docid": "bf8a24b974553d21849e9b066d78e6d4", "text": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.", "title": "" }, { "docid": "05a76f64a6acbcf48b7ac36785009db3", "text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.", "title": "" }, { "docid": "9414f4f7164c69f67b4bf200da9f1358", "text": "Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. 
We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.", "title": "" }, { "docid": "73c1f5b8e8df783c976427b64734f909", "text": "XTS-AES is an advanced mode of AES for data protection of sector-based devices. Compared to other AES modes, it features two secret keys instead of one, and an additional tweak for each data block. These characteristics make the mode not only resistant against cryptoanalysis attacks, but also more challenging for side-channel attack. In this paper, we propose two attack methods on XTS-AES overcoming these challenges. In the first attack, we analyze side-channel leakage of the particular modular multiplication in XTS-AES mode. In the second one, we utilize the relationship between two consecutive block tweaks and propose a method to work around the masking of ciphertext by the tweak. These attacks are verified on an FPGA implementation of XTS-AES. The results show that XTS-AES is susceptible to side-channel power analysis attacks, and therefore dedicated protections are required for security of XTS-AES in storage devices.", "title": "" }, { "docid": "9e451fe70d74511d2cc5a58b667da526", "text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.", "title": "" }, { "docid": "2687cb8fc5cde18e53c580a50b33e328", "text": "Social network sites (SNSs) are becoming an increasingly popular resource for both students and adults, who use them to connect with and maintain relationships with a variety of ties. For many, the primary function of these sites is to consume and distribute personal content about the self. Privacy concerns around sharing information in a public or semi-public space are amplified by SNSs’ structural characteristics, which may obfuscate the true audience of these disclosures due to their technical properties (e.g., persistence, searchability) and dynamics of use (e.g., invisible audiences, context collapse) (boyd, 2008b). 
Early work on the topic focused on the privacy pitfalls of Facebook and other SNSs (e.g., Acquisti & Gross, 2006; Barnes, 2006; Gross & Acquisti, 2005) and argued that individuals were (perhaps inadvertently) disclosing information that might be inappropriate for some audiences, such as future employers, or that might enable identity theft or other negative outcomes.", "title": "" }, { "docid": "f6f22580071dc149a8dc544835123977", "text": "This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1). This effort placed first in Semantic Similarity and second in Paraphrase Identification with scores of Pearson’s r of 61.9%, F1 of 66.7%, and maxF1 of 72.4%. We detail the approaches we explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features. Logistic regression is used to tie the systems together into the ensembles submitted for evaluation.", "title": "" }, { "docid": "b713da979bc3d01153eaae8827779b7b", "text": "Chronic lower leg pain results from various conditions, most commonly, medial tibial stress syndrome, stress fracture, chronic exertional compartment syndrome, nerve entrapment, and popliteal artery entrapment syndrome. Symptoms associated with these conditions often overlap, making a definitive diagnosis difficult. As a result, an algorithmic approach was created to aid in the evaluation of patients with complaints of lower leg pain and to assist in defining a diagnosis by providing recommended diagnostic studies for each condition. A comprehensive physical examination is imperative to confirm a diagnosis and should begin with an inquiry regarding the location and onset of the patient's pain and tenderness. Confirmation of the diagnosis requires performing the appropriate diagnostic studies, including radiographs, bone scans, magnetic resonance imaging, magnetic resonance angiography, compartmental pressure measurements, and arteriograms. Although most conditions causing lower leg pain are treated successfully with nonsurgical management, some syndromes, such as popliteal artery entrapment syndrome, may require surgical intervention. Regardless of the form of treatment, return to activity must be gradual and individualized for each patient to prevent future athletic injury.", "title": "" }, { "docid": "1b990fd9a3506f821519faad113f59ee", "text": "The primary focus of this study is to understand the current port operating condition and recommend short term measures to improve traffic condition in the port of Chennai. The cause of congestion is identified based on the data collected and observation made at port gates as well as at terminal gates in Chennai port. A simulation model for the existing road layout is developed in micro-simulation software VISSIM and is calibrated to reflect the prevailing condition inside the port. The data such as truck origin/destination, hourly inflow and outflow of trucks, speed, and stopping time at checking booths are used as input. Routing data is used to direct traffic to specific terminal or dock within the port. Several alternative scenarios are developed and simulated to get results of the key performance indicators. 
A comparative and detailed analysis of these indicators is used to evaluate recommendations to reduce congestion inside the port.", "title": "" }, { "docid": "435da20d6285a8b57a35fb407b96c802", "text": "This paper attempts to review examples of the use of storytelling and narrative in immersive virtual reality worlds. Particular attention is given to the way narrative is incorporated in artistic, cultural, and educational applications through the development of specific sensory and perceptual experiences that are based on characteristics inherent to virtual reality, such as immersion, interactivity, representation, and illusion. Narrative development is considered on three axes: form (visual representation), story (emotional involvement), and history (authenticated cultural content) and how these can come together.", "title": "" }, { "docid": "ebbc0b7aea9fafa1258f337fab4d20e8", "text": "This paper presents a new design of high frequency DC/AC inverter for home applications using fuel cells or photovoltaic array sources. A battery bank parallel to the DC link is provided to take care of the slow dynamic response of the source. The design is based on a push-pull DC/DC converter followed by a full-bridge PWM inverter topology. The nominal power rating is 10 kW. Actual design parameters, procedure and experimental results of a 1.5 kW prototype are provided. The objective of this paper is to explore the possibility of making renewable sources of energy utility interactive by means of low cost power electronic interface.", "title": "" }, { "docid": "f4d6cd6f6cd453077e162b64ae485c62", "text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities by Catherine L. de Mers Dr. Matt Tincani, Examination Committee Chair Assistant Professor of Special Education University of Nevada, Las Vegas This research study employed a multiple baseline across participants design to investigate the effects of music therapy intervention on hitting, screaming, and asking of three children with autism and/or developmental disabilities. Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after music therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were collected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and screaming decreased following intervention, while asking increased. Intervention effects were maintained three weeks following", "title": "" }, { "docid": "6fdd0c7d239417234cfc4706a82b5a0f", "text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks [1], e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone [2] and Duo Lingo [3].
The approach is grounded in control theory and capitalizes on recent work by [4], [5] that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on [4], [5] in several ways: (1) We develop a novel student model in which the teacher's actions can partially eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted analytically rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through deeper learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.", "title": "" }, { "docid": "e8dd0edd4ae06d53b78662f9acca09c5", "text": "A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the best-linear-unbiased-prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.", "title": "" }, { "docid": "83f88cbaed86220e0047b51c965a77ba", "text": "There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states).
We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level.", "title": "" }, { "docid": "4706f9e8d9892543aaeb441c45816b24", "text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.", "title": "" }, { "docid": "2b314587816255285bf985a086719572", "text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.", "title": "" }, { "docid": "eac86562382c4ec9455f1422b6f50e9f", "text": "In this paper we look at how to sparsify a graph i.e. how to reduce the edgeset while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. 
Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.", "title": "" }, { "docid": "93c9ffa6c83de5fece14eb351315fbed", "text": "nature protocols | VOL.7 NO.11 | 2012 | 1983 IntroDuctIon In a typical histology study, it is necessary to make thin sections of blocks of frozen or fixed tissue for microscopy. This process has major limitations for obtaining a 3D picture of structural components and the distribution of cells within tissues. For example, in axon regeneration studies, after labeling the injured axons, it is common that the tissue of interest (e.g., spinal cord, optic nerve) is sectioned. Subsequently, when tissue sections are analyzed under the microscope, only short fragments of axons are observed within each section; hence, the 3D information of axonal structures is lost. Because of this confusion, these fragmented axonal profiles might be interpreted as regenerated axons even though they could be spared axons1. In addition, the growth trajectories and target regions of the regenerating axons cannot be identified by visualization of axonal fragments. Similar problems could occur in cancer and immunology studies when only small fractions of target cells are observed within large organs. To avoid these limitations and problems, tissues ideally should be imaged at high spatial resolution without sectioning. However, optical imaging of thick tissues is limited mostly because of scattering of imaging light through the thick tissues, which contain various cellular and extracellular structures with different refractive indices. The imaging light traveling through different structures scatters and loses its excitation and emission efficiency, resulting in a lower resolution and imaging depth2,3. Optical clearing of tissues by organic solvents, which make the biological tissue transparent by matching the refractory indexes of different tissue layers to the solvent, has become a prominent method for imaging thick tissues2,4. In cleared tissues, the imaging light does not scatter and travels unobstructed throughout the different tissue layers. For this purpose, the first tissue clearing method was developed about a century ago by Spalteholz, who used a mixture of benzyl alcohol and methyl salicylate to clear large organs such as the heart5,6. In general, the first step of tissue clearing is tissue dehydration, owing to the low refractive index of water compared with cellular structures containing proteins and lipids4. Subsequently, dehydrated tissue is impregnated with an optical clearing agent, such as glucose7, glycerol8, benzyl alcohol–benzyl benzoate (BABB, also known as Murray’s clear)4,9–13 or dibenzyl ether (DBE)13,14, which have approximately the same refractive index as the impregnated tissue. At the end of the clearing procedure, the cleared tissue hardens and turns transparent, and thus resembles glass.", "title": "" }, { "docid": "6f22283e5142035d6f6f9d5e06ab1cd2", "text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. 
Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.", "title": "" } ]
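The colorization passage above describes a fusion layer that merges a global image prior with local, patch-level features. A minimal sketch of one common way to realize such a fusion (broadcast the global vector over the spatial grid, concatenate, and mix with a 1x1 convolution) is given below; the channel sizes, class name, and use of PyTorch are assumptions, and the actual network described in the passage is substantially larger.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Minimal sketch of fusing a global feature vector with local feature maps:
    the global vector is broadcast over the spatial grid, concatenated with the
    local features, and mixed by a 1x1 convolution. Channel sizes are assumptions."""
    def __init__(self, local_ch=256, global_ch=256, out_ch=256):
        super().__init__()
        self.mix = nn.Conv2d(local_ch + global_ch, out_ch, kernel_size=1)

    def forward(self, local_feats, global_feats):
        b, _, h, w = local_feats.shape
        g = global_feats.view(b, -1, 1, 1).expand(b, global_feats.shape[1], h, w)
        fused = torch.cat([local_feats, g], dim=1)
        return torch.relu(self.mix(fused))

# toy check: 28x28 local feature maps fused with a 256-d global descriptor
layer = FusionLayer()
out = layer(torch.randn(2, 256, 28, 28), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256, 28, 28])
```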
scidocsrr
911bbf99dbbf1992c6ebfc9349b22f70
Cross-Point Architecture for Spin-Transfer Torque Magnetic Random Access Memory
[ { "docid": "1898911f1e4f68f02edcc6c80fda47bc", "text": "This paper reports a 45nm spin-transfer-torque (STT) MRAM embedded into a standard CMOS logic platform that employs low-power (LP) transistors and Cu/low-k BEOL. We believe that this is the first-ever demonstration of embedded STT MRAM that is fully compatible with the 45nm logic technology. To ensure the switching margin, a novel Ȝreverse-connectionȝ 1T/1MT cell has been developed with a cell size of 0.1026 µm2. This cell is utilized to build embedded memory macros up to 32 Mbits in density. Device attributes and design windows have been examined by considering PVT variations to secure operating margins. Promising early reliability data on endurance, read disturb, and thermal stability have been obtained.", "title": "" } ]
[ { "docid": "1ed19900b9cfa74f27fef472acde0e84", "text": "We describe the capabilities of and algorithms used in a ne w FPGA CAD tool, Versatile Place and Route (VPR). In terms of minimizing routing area, VPR outperforms all published FPGA place and route tools to which we can compare. Although the algorithms used are based on pre viously known approaches, we present se veral enhancements that impro ve run-time and quality . We present placement and routing results on a ne w set of lar ge circuits to allo w future benchmark comparisons of FPGA place and route tools on circuit sizes more typical of today’ s industrial designs. VPR is capable of tar geting a broad range of FPGA architectures, and the source code is publicly a vailable. It and the associated netlist translation / clustering tool VPACK have already been used in a number of research projects w orldwide, and should be useful in man y areas of FPGA architecture research.", "title": "" }, { "docid": "ae6a02ee18e3599c65fb9db22706de44", "text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context specific weights. This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own", "title": "" }, { "docid": "ecaa792a7b3c9de643b7ed381ffb9d6b", "text": "In the field of Evolutionary Computation, a common myth that “An Evolutionary Algorithm (EA) will outperform a local search algorithm, given enough runtime and a large-enough population” exists. We believe that this is not necessarily true and challenge the statement with several simple considerations. We then investigate the population size parameter of EAs, as this is the element in the above claim that can be controlled. We conduct a related work study, which substantiates the assumption that there should be an optimal setting for the population size at which a specific EA would perform best on a given problem instance and computational budget. Subsequently, we carry out a large-scale experimental study on 68 instances of the Traveling Salesman Problem with static population sizes that are powers of two between (1+2) and (262 144 + 524 288) EAs as well as with adaptive population sizes. We find that analyzing the performance of the different setups over runtime supports our point of view and the existence of optimal finite population size settings.", "title": "" }, { "docid": "835b74c546ba60dfbb62e804daec8521", "text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from naturallanguage text in an unsupervised, domainindependent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. 
MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.", "title": "" }, { "docid": "8cc12987072c983bc45406a033a467aa", "text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.", "title": "" }, { "docid": "4941250a228f9494480d8dd175490671", "text": "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.", "title": "" }, { "docid": "59869b070268fd17145e23c7b0bb4b80", "text": "Friction characteristics between the wafer and the polishing pad play an important role in the chemical-mechanical planarization (CMP) process. In this paper, a wafer/pad friction modeling and monitoring scheme for the linear CMP process is presented. 
Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. The frictional torques of both the polisher spindle and the roller systems are used to monitor in situ the changes of the friction coefficient during a CMP process. Effects of pad conditioning and patterned wafer topography on the wafer/pad friction are also analyzed and discussed. The proposed friction modeling and monitoring scheme can be further used for real-time CMP monitoring and process fault diagnosis.", "title": "" }, { "docid": "eb81611ba60d5c07e0306dc4e93deee4", "text": "Research in child fatalities because of abuse and neglect has continued to increase, yet the mechanisms of the death incident and risk factors for these deaths remain unclear. The purpose of this study was to systematically examine the types of neglect that resulted in children's deaths as determined by child welfare and a child death review board. This case review study reviewed 22 years of data (n=372) of child fatalities attributed solely to neglect taken from a larger sample (N=754) of abuse and neglect death cases spanning the years 1987-2008. The file information reviewed was provided by the Oklahoma Child Death Review Board (CDRB) and the Oklahoma Department of Human Services (DHS) Division of Children and Family Services. Variables of interest were child age, ethnicity, and birth order; parental age and ethnicity; cause of death as determined by child protective services (CPS); and involvement with DHS at the time of the fatal event. Three categories of fatal neglect--supervisory neglect, deprivation of needs, and medical neglect--were identified and analyzed. Results found an overwhelming presence of supervisory neglect in child neglect fatalities and indicated no significant differences between children living in rural and urban settings. Young children and male children comprised the majority of fatalities, and African American and Native American children were over-represented in the sample when compared to the state population. This study underscores the critical need for prevention and educational programming related to appropriate adult supervision and adequate safety measures to prevent a child's death because of neglect.", "title": "" }, { "docid": "d44080fc547355ff8389f9da53d03c45", "text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. 
Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.", "title": "" }, { "docid": "28552dfe20642145afa9f9fa00218e8e", "text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in construction industry include real time visualization of projects, project monitoring by overlaying virtual models on actual built structures and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google cardboard has several limitations in delivering an AR experience relevant to construction requirements. To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.", "title": "" }, { "docid": "6bae81e837f4a498ae4c814608aac313", "text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.", "title": "" }, { "docid": "0f9a33f8ef5c9c415cf47814c9ef896d", "text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. 
Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. In addition, spinal cord astrocytic activation, represented by glial fibillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.", "title": "" }, { "docid": "a86b53d284ad8244d9917f05eeef5f15", "text": "Social networks consist of various communities that host members sharing common characteristics. Often some members of one community are also members of other communities. Such shared membership of different communities leads to overlapping communities. Detecting such overlapping communities is a challenging and computationally intensive problem. In this paper, we investigate the usability of high performance computing in the area of social networks and community detection. We present highly scalable variants of a community detection algorithm called Speaker-listener Label Propagation Algorithm (SLPA). We show that despite of irregular data dependencies in the computation, parallel computing paradigms can significantly speed up the detection of overlapping communities of social networks which is computationally expensive. We show by experiments, how various parallel computing architectures can be utilized to analyze large social network data on both shared memory machines and distributed memory machines, such as IBM Blue Gene.", "title": "" }, { "docid": "2c8061cf1c9b6e157bdebf9126b2f15c", "text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. 
A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.", "title": "" }, { "docid": "e81b4c01c2512f2052354402cd09522b", "text": "ACKNOWLEDGEMENTS ... CHAPTER", "title": "" }, { "docid": "b039138e9c0ef8456084891c45d7b36d", "text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.", "title": "" }, { "docid": "884f575062bb9e9702d3ec44d620e6cc", "text": "A key issue in the direct torque control of permanent magnet brushless DC motors is the estimation of the instantaneous electromagnetic torque, while sensorless control is often advantageous. A sliding mode observer is employed to estimate the non-sinusoidal back-emf waveform, and a simplified extended Kalman filter is used to estimate the rotor speed. Both are combined to calculate the instantaneous electromagnetic torque, the effectiveness of this approach being validated by simulations and measurements.", "title": "" }, { "docid": "bb89461e134951301bb41339f83d29d0", "text": "Gravity is the only component of Earth environment that remained constant throughout the entire process of biological evolution. However, it is still unclear how gravity affects plant growth and development. In this study, an in vitro cell culture of Arabidopsis thaliana was exposed to different altered gravity conditions, namely simulated reduced gravity (simulated microgravity, simulated Mars gravity) and hypergravity (2g), to study changes in cell proliferation, cell growth, and epigenetics. The effects after 3, 14, and 24-hours of exposure were evaluated. The most relevant alterations were found in the 24-hour treatment, being more significant for simulated reduced gravity than hypergravity. Cell proliferation and growth were uncoupled under simulated reduced gravity, similarly, as found in meristematic cells from seedlings grown in real or simulated microgravity.
The distribution of cell cycle phases was changed, as well as the levels and gene transcription of the tested cell cycle regulators. Ribosome biogenesis was decreased, according to levels and gene transcription of nucleolar proteins and the number of inactive nucleoli. Furthermore, we found alterations in the epigenetic modifications of chromatin. These results show that altered gravity effects include a serious disturbance of cell proliferation and growth, which are cellular functions essential for normal plant development.", "title": "" }, { "docid": "fd2da8187978c334d5fe265b4df14487", "text": "Monopulse is a classical radar technique [1] of precise direction finding of a source or target. The concept can be used both in radar applications as well as in modern communication techniques. The information contained in antenna sidelobes normally disturbs the determination of DOA in the case of a classical monopulse system. The suitable combination of amplitude- and phase-monopulse algorithm leads to the novel complex monopulse algorithm (CMP), which also can utilise information from the sidelobes by using the phase shift of the signals in the sidelobes in relation to the mainlobes.", "title": "" }, { "docid": "fd1b82c69a3182ab7f8c0a7cf2030b6f", "text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.", "title": "" } ]
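The monopulse passage above combines amplitude- and phase-monopulse processing into a complex monopulse algorithm. The snippet below is not that algorithm; it only illustrates the classical amplitude-comparison building block, in which the normalized difference of two squinted-beam magnitudes approximates the off-boresight angle near boresight. The beam shapes, squint, and beamwidth values are invented for the toy example.

```python
import numpy as np

def amplitude_monopulse_error(a_beam, b_beam, eps=1e-12):
    """Classical amplitude-comparison monopulse: the normalized difference of the
    two squinted-beam magnitudes is (approximately) proportional to the target's
    off-boresight angle near boresight. This is a textbook illustration, not the
    complex monopulse (CMP) algorithm described in the passage above."""
    a = np.abs(np.asarray(a_beam))
    b = np.abs(np.asarray(b_beam))
    return (a - b) / (a + b + eps)

# toy example: two Gaussian beams squinted +/-1 deg, target at 0.3 deg off boresight
theta, squint, beamwidth = 0.3, 1.0, 3.0
a = np.exp(-((theta - squint) / beamwidth) ** 2)
b = np.exp(-((theta + squint) / beamwidth) ** 2)
print(amplitude_monopulse_error(a, b))  # positive -> target lies on the 'a' beam side
```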
scidocsrr
32bf91d28b824afac3874285773666d9
From archaeon to eukaryote: the evolutionary dark ages of the eukaryotic cell.
[ { "docid": "023fa0ac94b2ea1740f1bbeb8de64734", "text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.", "title": "" } ]
[ { "docid": "179675ecf9ef119fcb0bc512995e2920", "text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.", "title": "" }, { "docid": "f7d535f9a5eeae77defe41318d642403", "text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.", "title": "" }, { "docid": "97582a93ef3977fab8b242a1ce102459", "text": "We propose a distributed, multi-camera video analysis paradigm for aiport security surveillance. We propose to use a new class of biometry signatures, which are called soft biometry including a person's height, built, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to ID and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research is to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. 
The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.", "title": "" }, { "docid": "a10b7c4b088c8df706381cfc3f1faec1", "text": "OBJECTIVE\nTo develop a clinical practice guideline for red blood cell transfusion in adult trauma and critical care.\n\n\nDESIGN\nMeetings, teleconferences and electronic-based communication to achieve grading of the published evidence, discussion and consensus among the entire committee members.\n\n\nMETHODS\nThis practice management guideline was developed by a joint taskforce of EAST (Eastern Association for Surgery of Trauma) and the American College of Critical Care Medicine (ACCM) of the Society of Critical Care Medicine (SCCM). We performed a comprehensive literature review of the topic and graded the evidence using scientific assessment methods employed by the Canadian and U.S. Preventive Task Force (Grading of Evidence, Class I, II, III; Grading of Recommendations, Level I, II, III). A list of guideline recommendations was compiled by the members of the guidelines committees for the two societies. Following an extensive review process by external reviewers, the final guideline manuscript was reviewed and approved by the EAST Board of Directors, the Board of Regents of the ACCM and the Council of SCCM.\n\n\nRESULTS\nKey recommendations are listed by category, including (A) Indications for RBC transfusion in the general critically ill patient; (B) RBC transfusion in sepsis; (C) RBC transfusion in patients at risk for or with acute lung injury and acute respiratory distress syndrome; (D) RBC transfusion in patients with neurologic injury and diseases; (E) RBC transfusion risks; (F) Alternatives to RBC transfusion; and (G) Strategies to reduce RBC transfusion.\n\n\nCONCLUSIONS\nEvidence-based recommendations regarding the use of RBC transfusion in adult trauma and critical care will provide important information to critical care practitioners.", "title": "" }, { "docid": "950fc4239ced87fef76ac687af3b09ac", "text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.", "title": "" }, { "docid": "ea31a93d54e45eede5ba3e6263e8a13e", "text": "Clustering methods for data-mining problems must be extremely scalable. 
In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.", "title": "" }, { "docid": "e37b3a68c850d1fb54c9030c22b5792f", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" }, { "docid": "9ca63cbf9fb0294aff706562d629e9d1", "text": "This demo showcases Scythe, a novel query-by-example system that can synthesize expressive SQL queries from input-output examples. Scythe is designed to help end-users program SQL and explore data simply using input-output examples. From a web-browser, users can obtain SQL queries with Scythe in an automated, interactive fashion: from a provided example, Scythe synthesizes SQL queries and resolves ambiguities via conversations with the users. 
In this demo, we first show how end users can formulate queries using Scythe; we then switch to the perspective of an algorithm designer to show how Scythe can scale up to handle complex SQL features, like outer joins and subqueries.", "title": "" }, { "docid": "e34d244a395a753b0cb97f8535b56add", "text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "title": "" }, { "docid": "c16428f049cebdc383c4ee24f75da6b0", "text": "Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples. C © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 14–23 DOI: 10.1002/widm.8", "title": "" }, { "docid": "3364f6fab787e3dbcc4cb611960748b8", "text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures. 
The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.", "title": "" }, { "docid": "f562bd72463945bd35d42894e4815543", "text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.", "title": "" }, { "docid": "27caf5f3a638e5084ca361424e69e9d0", "text": "Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application such as copyright protection, authentication, tamper detection, and data hiding as well as differences in technology and system requirements for different media types such as digital images, video, audio and text.", "title": "" }, { "docid": "869a2cfbb021104e7f3bc7cb214b82f9", "text": "The commoditization of high-performance networking has sparked research interest in the RDMA capability of this hardware. One-sided RDMA primitives, in particular, have generated substantial excitement due to the ability to directly access remote memory from within an application without involving the TCP/IP stack or the remote CPU. 
This paper considers how to leverage RDMA to improve the analytical performance of parallel database systems. To shuffle data efficiently using RDMA, one needs to consider a complex design space that includes (1) the number of open connections, (2) the contention for the shared network interface, (3) the RDMA transport function, and (4) how much memory should be reserved to exchange data between nodes during query processing. We contribute six designs that capture salient trade-offs in this design space. We comprehensively evaluate how transport-layer decisions impact the query performance of a database system for different generations of InfiniBand. We find that a shuffling operator that uses the RDMA Send/Receive transport function over the Unreliable Datagram transport service can transmit data up to 4× faster than an RDMA-capable MPI implementation in a 16-node cluster. The response time of TPC-H queries improves by as much as 2×.", "title": "" }, { "docid": "644ebe324c23a23bc081119f13190810", "text": "Most computer systems currently consist of DRAM as main memory and hard disk drives (HDDs) as storage devices. Due to the volatile nature of DRAM, the main memory may suffer from data loss in the event of power failures or system crashes. With rapid development of new types of non-volatile memory (NVRAM), such as PCM, Memristor, and STT-RAM, it becomes likely that one of these technologies will replace DRAM as main memory in the not-too-distant future. In an NVRAM based buffer cache, any updated pages can be kept longer without the urgency to be flushed to HDDs. This opens opportunities for designing new buffer cache policies that can achieve better storage performance. However, it is challenging to design a policy that can also increase the cache hit ratio. In this paper, we propose a buffer cache policy, named I/O-Cache, that regroups and synchronizes long sets of consecutive dirty pages to take advantage of HDDs' fast sequential access speed and the non-volatile property of NVRAM. In addition, our new policy can dynamically separate the whole cache into a dirty cache and a clean cache, according to the characteristics of the workload, to decrease storage writes. We evaluate our scheme with various traces. The experimental results show that I/O-Cache shortens I/O completion time, decreases the number of I/O requests, and improves the cache hit ratio compared with existing cache policies.", "title": "" }, { "docid": "9da15e2851124d6ca1524ba28572f922", "text": "With the growth of mobile data applications and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and is expected to be the key player in the new cellular technologies. This paper presents an overview of the major aspects related to massive MIMO design, including antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.", "title": "" }, { "docid": "e1a4e8b8c892f1e26b698cd9fd37c3db", "text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasingly using such networks for propagating spam. 
While existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework, including: 1) new spam detected on one social network can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.", "title": "" }, { "docid": "354cbda757045bcee7044159bd353ca5", "text": "In this paper we present the preliminary work of a Basque poetry generation system. Basically, we have extracted the POS-tag sequences from some verse corpora and calculated the probability of each sequence. For the generation process we have defined 3 different experiments: Based on a strophe from the corpora, we (a) replace each word with another according to its POS-tag and suffixes, (b) replace each noun and adjective with another equally inflected word and (c) replace only nouns with semantically related ones (inflected). Finally we evaluate those strategies using a Turing Test-like evaluation.", "title": "" }, { "docid": "c479983e954695014417976275030746", "text": "Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that allow themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, which allows the model to be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited for clustering, but also classification, outperforming Semi-Non-negative Matrix Factorization, but also other state-of-the-art methodology variants.", "title": "" }, { "docid": "81b5379abf3849e1ae4e233fd4955062", "text": "Three-phase dc/dc converters have superior characteristics, including a lower current rating of switches, a reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. 
Compared with the available TPTL converters, the proposed converter has fewer switches and a simpler configuration. The voltage stress on all switches can be reduced to half of the input voltage. Meanwhile, the ripple frequency of the output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" } ]
scidocsrr
6d3e17e4b44a2cadedc8f483ab186cb2
Add English to image Chinese captioning
[ { "docid": "210a777341f3557081d43f2580428c32", "text": "This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description.", "title": "" }, { "docid": "c879ee3945592f2e39bb3306602bb46a", "text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.", "title": "" }, { "docid": "9eaab923986bf74bdd073f6766ca45b2", "text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.", "title": "" } ]
[ { "docid": "b59965c405937a096186e41b2a3877c3", "text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].", "title": "" }, { "docid": "2827e0d197b7f66c7f6ceb846c6aaa27", "text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e84ca42f96cca0fe3ed7c70d90554a8d", "text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. 
Experimental results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for the cross-language citation recommendation task.", "title": "" }, { "docid": "2c39430076bf63a05cde06fe57a61ff4", "text": "With the advent of IoT-based technologies, the overall industrial sector is amenable to a fundamental and essential change akin to the industrial revolution. Online monitoring solutions for environmental pollution parameters using Internet of Things (IoT) techniques help us to gather parameter values such as pH, temperature, humidity and the concentration of carbon monoxide gas using sensors, and enable keen control of the environmental pollution caused by industries. This paper introduces a LabVIEW-based online pollution monitoring system for industries to control the pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameters from the DHT-11 sensor and the carbon dioxide concentration from the MG-811 sensor, and updates them in an online database using MySQL. For monitoring and control, a website is designed and hosted, which gives a real essence of IoT. To increase reliability and flexibility, an Android application is also developed.", "title": "" }, { "docid": "bfb79421ca0ddfd5a584f009f8102a2c", "text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs) is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located a bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with a normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.", "title": "" }, { "docid": "7ea3d3002506e0ea6f91f4bdab09c2d5", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. 
We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "8c6c8ab24394ddfde8209cd0dacc9da3", "text": "The Intelligence in Wikipedia project at the University of Washington is combining self-supervised information extraction (IE) techniques with a mixed initiative interface designed to encourage communal content creation (CCC). Since IE and CCC are each powerful ways to produce large amounts of structured information, they have been studied extensively — but only in isolation. By combining the two methods in a virtuous feedback cycle, we aim for substantial synergy. While previous papers have described the details of individual aspects of our endeavor [25, 26, 24, 13], this report provides an overview of the project's progress and vision.", "title": "" }, { "docid": "29786d164d0d5e76ea9c098944e27266", "text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.", "title": "" }, { "docid": "16f2811b6052a1a9e527d61b2ff6509b", "text": "Corneal topography is a non-invasive medical imaging technique to assess the shape of the cornea in ophthalmology. In this paper we demonstrate that in addition to its health care use, corneal topography could provide valuable biometric measurements for person authentication. 
To extract a feature vector from these images (topographies), we propose to fit the geometry of the corneal surface with Zernike polynomials, followed by a linear discriminant analysis (LDA) of the Zernike coefficients to select the most discriminating features. The results show that the proposed method reduced the typical d-dimensional Zernike feature vector (d=36) into a much lower r-dimensional feature vector (r=3), and improved the Equal Error Rate from 2.88% to 0.96%, with the added benefit of faster computation time.", "title": "" }, { "docid": "f9cc9e1ddc0d1db56f362a1ef409274d", "text": "Phishing is increasing dramatically with the development of modern technologies and global computer networks. This results in the loss of customer's confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is a fraudulent effort that aims to acquire sensitive information from users, such as credit card credentials and social security numbers. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by the Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.", "title": "" }, { "docid": "1d724b07c232098e2a5e5af2bb1e7c83", "text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O'Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.", "title": "" }, { "docid": "012f30fbeed17fcfd098e5362bd95ee8", "text": "We prove that binary orthogonal arrays of strength 8, length 12 and cardinality 1536 do not exist. This implies the nonexistence of arrays of parameters (strength,length,cardinality) = (n, n + 4, 6.2) for every integer n ≥ 8.", "title": "" }, { "docid": "a50b7ab02d2fe934f5fb5bed14fcdad9", "text": "An empirical study has been conducted investigating the relationship between the performance of an aspect based language model in terms of perplexity and the corresponding information retrieval performance obtained. It is observed, on the corpora considered, that the perplexity of the language model has a systematic relationship with the achievable precision recall performance though it is not statistically significant.", "title": "" }, { "docid": "37a6f3773aebf46cc40266b8bb5692af", "text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. 
Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.", "title": "" }, { "docid": "60eff31e8f742873cec993f1499385b5", "text": "There is an increasing interest in employing multiple sensors for surveillance and communications. Some of the motivating factors are reliability, survivability, increase in the number of targets under consideration, and increase in required coverage. Tenney and Sandell have recently treated the Bayesian detection problem with distributed sensors. They did not consider the design of data fusion algorithms. We present an optimum data fusion structure given the detectors. Individual decisions are weighted according to the reliability of the detector and then a threshold comparison is performed to obtain the global decision.", "title": "" }, { "docid": "a9d22e2568bcae7a98af7811546c7853", "text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. 
Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "d5b004af32bd747c2b5ad175975f8c06", "text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.", "title": "" }, { "docid": "95037e7dc3ae042d64a4b343ad4efd39", "text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.", "title": "" }, { "docid": "118526b566b800d9dea30d2e4c904feb", "text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, in this case domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query based, Arabic text, single document summarization using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic related concepts/ keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain specific knowledge base to the expansion. For the summarization dataset, Essex Arabic Summaries Corpus was used. It has many topic based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base than to just use the WordNet.", "title": "" }, { "docid": "3aaffdda034c762ad36954386d796fb9", "text": "KNTU CDRPM is a cable driven redundant parallel manipulator, which is under investigation for possible high speed and large workspace applications. 
This newly developed mechanism has several advantages compared to conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety for failure in cables, and its design is suitable for long-time high acceleration motions. In this paper, the collision-free workspace of the manipulator is derived by applying a fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design for spatial cable-driven parallel manipulators. The results are elaborated in three presentations: constant-orientation workspace, total orientation workspace and orientation workspace.", "title": "" } ]
scidocsrr
db86988618b0f2e30c4f824784eba8ff
A phase space model of Fourier ptychographic microscopy.
[ { "docid": "0cce6366df945f079dbb0b90d79b790e", "text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.", "title": "" } ]
[ { "docid": "6d728174d576ac785ff093f4cdc16e1b", "text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.", "title": "" }, { "docid": "b06a3c929a934633e174bfe1adab21f1", "text": "In this paper, we analyze the radio channel characteristics at mmWave frequencies for 5G cellular communications in urban scenarios. 3D-ray tracing simulations in the downtown areas of Ottawa and Chicago are conducted in both the 2 GHz and 28 GHz bands. Each area has two different deployment scenarios, with different transmitter height and different density of buildings. Based on the observations of the ray-tracing experiments, important parameters of the radio channel model, such as path loss exponent, shadowing variance, delay spread and angle spread, are provided, forming the basis of a mmWave channel model. Based on the analysis and the 3GPP 3D-Spatial Channel Model (SCM) framework, we introduce a preliminary mmWave channel model at 28 GHz.", "title": "" }, { "docid": "89b17ff10887b84270c1d627231a0721", "text": "A novel robust adaptive beamforming method for conformal array is proposed. By using interpolation technique, the cylindrical conformal array with directional antenna elements is transformed to a virtual uniform linear array with omni-directional elements. This method can compensate the amplitude and mutual coupling errors as well as desired signal point errors of the conformal array efficiently. It is a universal method and can be applied to other curved conformal arrays. After the transformation, most of the existing adaptive beamforming algorithms can be applied to conformal array directly. The efficiency of the proposed scheme is assessed through numerical simulations.", "title": "" }, { "docid": "1389323613225897330d250e9349867b", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today's big data world. The author demonstrates how to leverage a company's existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining. 
By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining.", "title": "" }, { "docid": "554d0255aef7ffac9e923da5d93b97e3", "text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.", "title": "" }, { "docid": "b623437391b298c2e618b0f42d3e19a9", "text": "In the era of the Social Web, crowdfunding has become an increasingly more important channel for entrepreneurs to raise funds from the crowd to support their startup projects. Previous studies examined various factors such as project goals, project durations, and categories of projects that might influence the outcomes of the fund raising campaigns. However, textual information of projects has rarely been studied for analyzing crowdfunding successes. The main contribution of our research work is the design of a novel text analytics-based framework that can extract latent semantics from the textual descriptions of projects to predict the fund raising outcomes of these projects. More specifically, we develop the Domain-Constraint Latent Dirichlet Allocation (DC-LDA) topic model for effective extraction of topical features from texts. Based on two real-world crowdfunding datasets, our experimental results reveal that the proposed framework outperforms a classical LDA-based method in predicting fund raising success by an average of 11% in terms of F1 score. The managerial implication of our research is that entrepreneurs can apply the proposed methodology to identify the most influential topical features embedded in project descriptions, and hence to better promote their projects and improve the chance of raising sufficient funds for their projects.", "title": "" }, { "docid": "07c185c21c9ce3be5754294a73ab5e3c", "text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. 
We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "11d1978a3405f63829e02ccb73dcd75f", "text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.", "title": "" }, { "docid": "a488a74817a8401eff1373d4e21f060f", "text": "We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence. These models lead to better performance, both in terms of general translation quality and pronoun prediction, when trained on small corpora, although this improvement largely disappears when trained with a larger corpus. We also discover that attention-based neural machine translation is well suited for pronoun prediction and compares favorably with other approaches that were specifically designed for this task.", "title": "" }, { "docid": "3111ef9867be7cf58be9694cbe2a14d9", "text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variousity of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory(Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simutaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during training phase, and then separate them by length during testing phase. In NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis(CGED), Our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simpified Chinese track.", "title": "" }, { "docid": "40413aa7fd92e042b8c359b2cf6d2d23", "text": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation, and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and text entailment relation is developed. In LCEAS, text entailment approach is enhanced to suit Arabic language. Roots and semantic-relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment based segmentation for Arabic text. 
LCEAS is a single-document summarization approach, which is constructed using an extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against the Essex Arabic Summaries Corpus (EASC) (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems. Keywords: Text Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.", "title": "" }, { "docid": "e587b5954c957f268d21878ede3359f8", "text": "ing audit logs", "title": "" }, { "docid": "b31244421f89b32704509dfeb80702a0", "text": "Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency in scanning high-dimensional parametric spaces and the need for representative image features, which require significant efforts of manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the DL potential for detection and segmentation in full 3D data with parametrized representations.", "title": "" }, { "docid": "9664431f0cfc22567e1e5c945f898595", "text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identification and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. 
Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.", "title": "" }, { "docid": "b79bf80221c893f40abd7fd6b8a7145a", "text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.", "title": "" }, { "docid": "486e3f5614f69f60d8703d8641c73416", "text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. 
The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.", "title": "" }, { "docid": "4331057bb0a3f3add576513fa71791a8", "text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.", "title": "" }, { "docid": "70bed43cdfd50586e803bf1a9c8b3c0a", "text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.", "title": "" }, { "docid": "6cf9456d2fe55d2115fd40efbb1a8f96", "text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. 
This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.", "title": "" }, { "docid": "595a31e82d857cedecd098bf4c910e99", "text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.", "title": "" } ]
scidocsrr
41cf9a3cade6991077fdfdff28417747
Data Mining Techniques for Detecting Household Characteristics Based on Smart Meter Data
[ { "docid": "8e4eb520c80dfa8d39c69b1273ea89c8", "text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.", "title": "" }, { "docid": "841f2ab48d111a6b70b2a3171c155f44", "text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems, that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.", "title": "" } ]
[ { "docid": "62ca2853492b017a052b9bf5e9b955ff", "text": "This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we analyze tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short Term Memory (LSTM) without normalizer.", "title": "" }, { "docid": "3223563162967868075a43ca86c1d31a", "text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these", "title": "" }, { "docid": "559a4175347e5fea57911d9b8c5080e6", "text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. 
Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.", "title": "" }, { "docid": "1b0a8696b0bf79c118c5b02a7a2f4d7c", "text": "Mechanical properties of living cells are commonly described in terms of the laws of continuum mechanics. The purpose of this report is to consider the implications of an alternative approach that emphasizes the discrete nature of stress bearing elements in the cell and is based on the known structural properties of the cytoskeleton. We have noted previously that tensegrity architecture seems to capture essential qualitative features of cytoskeletal shape distortion in adherent cells (Ingber, 1993a; Wang et al., 1993). Here we extend those qualitative notions into a formal microstructural analysis. On the basis of that analysis we attempt to identify unifying principles that might underlie the shape stability of the cytoskeleton. For simplicity, we focus on a tensegrity structure containing six rigid struts interconnected by 24 linearly elastic cables. Cables carry initial tension (“prestress”) counterbalanced by compression of struts. Two cases of interconnectedness between cables and struts are considered: one where they are connected by pin-joints, and the other where the cables run through frictionless loops at the junctions. At the molecular level, the pinned structure may represent the case in which different cytoskeletal filaments are cross-linked whereas the looped structure represents the case where they are free to slip past one another. The system is then subjected to uniaxial stretching. Using the principle of virtual work, stretching force vs. extension and structural stiffness vs. stretching force relationships are calculated for different prestresses. The stiffness is found to increase with increasing prestress and, at a given prestress, to increase approximately linearly with increasing stretching force. This behavior is consistent with observations in living endothelial cells exposed to shear stresses (Wang & Ingber, 1994). At a given prestress, the pinned structure is found to be stiffer than the looped one, a result consistent with data on mechanical behavior of isolated, cross-linked and uncross-linked actin networks (Wachsstock et al., 1993). On the basis of our analysis we concluded that architecture and the prestress of the cytoskeleton might be key features that underlie a cell’s ability to regulate its shape.", "title": "" }, { "docid": "75189509743ba4f329b5ea5877f0e8ad", "text": "The psychology of conspiracy theory beliefs is not yet well understood, although research indicates that there are stable individual differences in conspiracist ideation - individuals' general tendency to engage with conspiracy theories. Researchers have created several short self-report measures of conspiracist ideation. These measures largely consist of items referring to an assortment of prominent conspiracy theories regarding specific real-world events. However, these instruments have not been psychometrically validated, and this assessment approach suffers from practical and theoretical limitations. Therefore, we present the Generic Conspiracist Beliefs (GCB) scale: a novel measure of individual differences in generic conspiracist ideation. The scale was developed and validated across four studies. 
In Study 1, exploratory factor analysis of a novel 75-item measure of non-event-based conspiracist beliefs identified five conspiracist facets. The 15-item GCB scale was developed to sample from each of these themes. Studies 2, 3, and 4 examined the structure and validity of the GCB, demonstrating internal reliability, content, criterion-related, convergent and discriminant validity, and good test-retest reliability. In sum, this research indicates that the GCB is a psychometrically sound and practically useful measure of conspiracist ideation, and the findings add to our theoretical understanding of conspiracist ideation as a monological belief system underpinned by a relatively small number of generic assumptions about the typicality of conspiratorial activity in the world.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "65d3d020ee63cdeb74cb3da159999635", "text": "We investigated the effects of the format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.", "title": "" }, { "docid": "dd9b6b67f19622bfffbad427b93a1829", "text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when high-resolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the number of surveillance cameras in the city increases, the videos that are captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination conditions, and diverse angles of view. Faces in these images are generally small in size. 
Several studies that addressed this problem employed techniques such as super-resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Then, a systematic analysis of the works on this topic is presented by category. In addition to describing the methods, we also focus on datasets and experimental settings. We further address the related works on unconstrained low-resolution face recognition and compare them with the results that use synthetic low-resolution data. Finally, we summarize the general limitations and speculate on priorities for future work.", "title": "" }, { "docid": "6d594c21ff1632b780b510620484eb62", "text": "The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and propose adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results than a competitive baseline.", "title": "" }, { "docid": "e02707b857a51a5f4b98de1b592f5cc3", "text": "This paper presents a formal analysis of the train-to-trackside communication protocols used in the European Railway Traffic Management System (ERTMS) standard, and in particular the EuroRadio protocol. This protocol is used to secure important commands sent between train and trackside, such as movement authority and emergency stop messages. We perform our analysis using the applied pi-calculus and the ProVerif tool. This provides a powerful and expressive framework for protocol analysis and allows us to check a wide range of security properties based on checking correspondence assertions. We show how it is possible to model the protocol’s counter-style timestamps in this framework. We define ProVerif assertions that allow us to check for secrecy of long- and short-term keys, authenticity of entities, message insertion, deletion, replay and reordering. We find that the protocol provides most of these security features; however, it allows undetectable message deletion and the forging of emergency messages. We discuss the relevance of these results and make recommendations to further enhance the security of ERTMS.", "title": "" }, { "docid": "25b183ce7ecc4b9203686c7ea68aacea", "text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. 
Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.", "title": "" }, { "docid": "2c73318b59e5d7101884f2563dd700b5", "text": "BACKGROUND\nEffective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.\n\n\nMETHODS\nA sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non brain damaged subjects.\n\n\nRESULTS\nCompared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range for an amount which was observed in the non-pusher subjects when they were tilted for about 15 degrees into the ipsiversive direction.\n\n\nCONCLUSION\nThe observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients.", "title": "" }, { "docid": "0950052c92b4526c253acc0d4f0f45a0", "text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. 
We show how the Language Grid can assist the cross-cultural research process.", "title": "" }, { "docid": "b1cabb319ce759343ad3f043c7d86b14", "text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with a reasonable worst-case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to an m-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job.", "title": "" }, { "docid": "5063adc5020cacddb5a4c6fd192fc17e", "text": "In this paper, a novel 1-to-4 modified Wilkinson power divider operating over the frequency range of 3 GHz to 8 GHz is proposed. The design of the proposed divider is based on two different stages and is printed on FR4 (epoxy laminate) material with a thickness of 1.57 mm and εr = 4.3. The modified design of this power divider includes curved corners instead of sharp edges and some modification in the length of the matching stubs. In addition, equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance have been obtained over the mentioned frequency range. The design concept and optimization are realized through CST simulation software.", "title": "" }, { "docid": "66af4d496e98e4b407922fbe9970a582", "text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. 
The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.", "title": "" }, { "docid": "12cde236faadf6be0edf7b3699fc7a6c", "text": "for 4F2 DRAM Cell Array with sub 40 nm Technology Jae-Man Yoon, Kangyoon Lee, Seung-Bae Park, Seong-Goo Kim, Hyoung-Won Seo, Young-Woong Son, Bong-Soo Kim, Hyun-Woo Chung, Choong-Ho Lee*, Won-Sok Lee* *, Dong-Chan Kim* * *, Donggun Park*, Wonshik Lee and Byung-Il Ryu ATD Team, Device Research Team*, CAEP*, PD Team***, Semiconductor R&D Division, Samsung Electronics Co., San #24, Nongseo-Dong, Kiheung-Gu, Yongin-City, Kyunggi-Do, 449-711, Korea Tel) 82-31-209-4741, Fax) 82-31-209-3274, E-mail)", "title": "" }, { "docid": "12d565f0aaa6960e793b96f1c26cb103", "text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.", "title": "" }, { "docid": "e5bf42029c05ceadebd9fc4205446192", "text": "To demonstrate generality and to illustrate some additional properties of the method, we also apply the explanation method to a second domain: classifying news stories. The 20 newsgroups data set is a benchmark data set used in document classification research. It contains about 20,000 news items from 20 newsgroups representing different topics, and has a vocabulary of 26,214 different words (after stemming) (Lang 1995). The 20 topics can be categorized into seven top-level usenet categories with related news items: alternative (alt), computers (comp), miscellaneous (misc), recreation (rec), science (sci), society (soc), and talk (talk). One typical problem studied with this data set is to build classifiers to identify stories from these seven high-level news categories, which for our purposes gives a wide variety of different topics across which to provide document classification explanations. Looking at the seven high-level categories also provides realistic richness to the task: in many real document classification tasks, the class of interest is actually a collection (disjunction) of related concepts (consider, for example, “hate speech” in the safe-advertising domain).", "title": "" }, { "docid": "733dc724bd0abf127c05a7717476a542", "text": "By analogy with Internet of things, Internet of vehicles (IoV) that enables ubiquitous information exchange and content sharing among vehicles with little or no human intervention is a key enabler for the intelligent transportation industry. In this paper, we study how to combine both the physical and social layer information for realizing rapid content dissemination in device-to-device vehicle-to-vehicle (D2D-V2V)-based IoV networks. In the physical layer, headway distance of vehicles is modeled as a Wiener process, and the connection probability of D2D-V2V links is estimated by employing the Kolmogorov equation. 
In the social layer, the social relationship tightness that represents content selection similarities is obtained by Bayesian nonparametric learning based on real-world social big data, which are collected from the largest Chinese microblogging service Sina Weibo and the largest Chinese video-sharing site Youku. Then, a price-rising-based iterative matching algorithm is proposed to solve the formulated joint peer discovery, power control, and channel selection problem under various quality-of-service requirements. Finally, numerical results demonstrate the effectiveness and superiority of the proposed algorithm from the perspectives of weighted sum rate and matching satisfaction gains.", "title": "" } ]
scidocsrr
b0dd3f1aad518c98c1f4ff4f042a5703
Semantic smart grid services: Enabling a standards-compliant Internet of energy platform with IEC 61850 and OPC UA
[ { "docid": "ed06226e548fac89cc06a798618622c6", "text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.", "title": "" }, { "docid": "3bc9eb46e389b7be4141950142c606dd", "text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.", "title": "" } ]
[ { "docid": "008f94637ed982a75c51577f4bfc3c34", "text": "Revelations of large scale electronic surveillance and data mining by governments and corporations have fueled increased adoption of HTTPS. We present a traffic analysis attack against over 6000 webpages spanning the HTTPS deployments of 10 widely used, industryleading websites in areas such as healthcare, finance, legal services and streaming video. Our attack identifies individual pages in the same website with 89% accuracy, exposing personal details including medical conditions, financial and legal affairs and sexual orientation. We examine evaluation methodology and reveal accuracy variations as large as 18% caused by assumptions affecting caching and cookies. We present a novel defense reducing attack accuracy to 27% with a 9% traffic increase, and demonstrate significantly increased effectiveness of prior defenses in our evaluation context, inclusive of enabled caching, user-specific cookies and pages within the same website.", "title": "" }, { "docid": "a5e23ca50545378ef32ed866b97fd418", "text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.", "title": "" }, { "docid": "f905016b422d9c16ac11b85182f196c7", "text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.", "title": "" }, { "docid": "956b7139333421343e8ed245a63a7b4b", "text": "Purpose – During the last decades, different quality management concepts, including total quality management (TQM), six sigma and lean, have been applied by many different organisations. Although much important work has been documented regarding TQM, six sigma and lean, a number of questions remain concerning the applicability of these concepts in various organisations and contexts. Hence, the purpose of this paper is to describe the similarities and differences between the concepts, including an evaluation and criticism of each concept. 
Design/methodology/approach – Within a case study, a literature review and face-to-face interviews in typical TQM, six sigma and lean organisations have been carried out. Findings – While TQM, six sigma and lean have many similarities, especially concerning origin, methodologies, tools and effects, they differ in some areas, in particular concerning the main theory, approach and the main criticism. The lean concept is slightly different from TQM and six sigma. However, there is a lot to gain if organisations are able to combine these three concepts, as they are complementary. Six sigma and lean are excellent road-maps, which could be used one by one or combined, together with the values in TQM. Originality/value – The paper provides guidance to organisations regarding the applicability and properties of quality concepts. Organisations need to work continuously with customer-orientated activities in order to survive; irrespective of how these activities are labelled. The paper will also serve as a basis for further research in this area, focusing on practical experience of these concepts.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "bf7d502a818ac159cf402067b4416858", "text": "We present algorithms for evaluating and performing modeling operatyons on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.", "title": "" }, { "docid": "b3f423e513c543ecc9fe7003ff9880ea", "text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. 
In this system, portable sensors collect air quality information in a timely manner, which is then transmitted through a low-power wide-area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.", "title": "" }, { "docid": "b7062e40643ff1b879247a3f4ec3b07f", "text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement. But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism that can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival. 
In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. Specificity versus undifferentiated arousal In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities. At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern (rapid and forceful contractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver) are well known. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotion, an extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962). 
Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action? No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings.", "title": "" }, { "docid": "09e2a91a25e4ecccc020a91e14a35282", "text": "A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.", "title": "" }, { "docid": "c97e005d827b712e7d61d8a911c3bed6", "text": "Industries and individuals outsource databases to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to the cloud server. The main reason is that the database is hosted and processed in the cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties and access patterns. Furthermore, an increased number of queries will inevitably leak more information to the cloud server. 
In this paper, we propose a two-cloud architecture for secure database, with a series of intersection protocols that provide privacy preservation to various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.", "title": "" }, { "docid": "6c2b19b2888d00fccb1eae37352d653d", "text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>", "title": "" }, { "docid": "7dc652c9b86f63c0a6b546396980783b", "text": "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "title": "" }, { "docid": "39e71a3228331eb8b1574173cfb1e04a", "text": "Euler Number is one of the most important characteristics in topology. In two-dimension digital images, the Euler characteristic is locally computable. The form of Euler Number formula is different under 4-connected and 8-connected conditions. Based on the definition of the Foreground Segment and Neighbor Number, a formula of the Euler Number computing is proposed and is proved in this paper. It is a new idea to locally compute Euler Number of 2D image.", "title": "" }, { "docid": "b2d1a0befef19d466cd29868d5cf963b", "text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.", "title": "" }, { "docid": "c51e1b845d631e6d1b9328510ef41ea0", "text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. 
In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.", "title": "" }, { "docid": "57c2422bac0a8f44b186fadbfcadb393", "text": "In this paper, we propose a vision-based multiple lane boundaries detection and estimation structure that fuses the edge features and the high intensity features. Our approach utilizes a camera as the only input sensor. The application of Kalman filter for information fusion and tracking significantly improves the reliability and robustness of our system. We test our system on roads with different driving scenarios, including day, night, heavy traffic, rain, confusing textures and shadows. The feasibility of our approach is demonstrated by quantitative evaluation using manually labeled video clips.", "title": "" }, { "docid": "838b599024a14e952145af0c12509e31", "text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.", "title": "" }, { "docid": "6de71e8106d991d2c3d2b845a9e0a67e", "text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. 
In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.", "title": "" }, { "docid": "007f741a718d0c4a4f181676a39ed54a", "text": "Following the development of computing and communication technologies, the idea of Internet of Things (IoT) has been realized not only at research level but also at application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, then conduct face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume hence dramatically relief the processing pressure of the cloud back-end. Our experimental results show with IoT edge device acceleration, it is possible to implement face in video recognition application without introducing the middle-ware or cloud-let layer, while still achieving real-time processing speed.", "title": "" } ]
scidocsrr
397e1ca66cd9cc314ee3b6182ca6b548
On Organizational Becoming: Rethinking Organizational Change
[ { "docid": "efd723e99064699de2ed5400887c1eda", "text": "Building on a formal theory of the structural aspects of organizational change initiated in Hannan, Pólos, and Carroll (2002a, 2002b), this paper focuses on structural inertia. We define inertia as a persistent organizational resistance to changing architectural features. We examine the evolutionary consequences of architectural inertia. The main theorem holds that selection favors architectural inertia in the sense that the median level of inertia in cohort of organizations presumably increases over time. A second theorem holds that the selection intensity favoring architectural inertia is greater when foresight about the consequences of changes is more limited. According to the prior theory of Hannan, Pólos, and Carroll (2002a, 2002b), foresight is limited by complexity and opacity. Thus it follows that the selection intensity favoring architectural inertia is stronger in populations composed of complex and opaque organizations than in those composed of simple and transparent ones. ∗This research was supported by fellowships from the Netherlands Institute for Advanced Study and by the Stanford Graduate School of Business Trust, ERIM at Erasmus University, and the Centre for Formal Studies in the Social Sciences at Lorand Eötvös University. We benefited from the comments of Jim Baron, Dave Barron, Gábor Péli, Joel Podolny, and the participants in the workshop of the Nagymaros Group on Organizational Ecology and in the Stanford Strategy Conference. †Stanford University ‡Loránd Eötvös University, Budapest and Erasmus University, Rotterdam §Stanford University", "title": "" }, { "docid": "9c5535f218f6228ba6b2a8e5fdf93371", "text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.", "title": "" } ]
[ { "docid": "b168f298448b3ba16b7f585caae7baa6", "text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record", "title": "" }, { "docid": "41cdd0e8bcbffbd4c66b8088e26b94fe", "text": "We propose a neural network for 3D point cloud processing that exploits spherical convolution kernels and octree partitioning of space. The proposed metric-based spherical kernels systematically quantize point neighborhoods to identify local geometric structures in data, while maintaining the properties of translation-invariance and asymmetry. The network architecture itself is guided by octree data structuring that takes full advantage of the sparse nature of irregular point clouds. We specify spherical kernels with the help of neurons in each layer that in turn are associated with spatial locations. We exploit this association to avert dynamic kernel generation during network training, that enables efficient learning with high resolution point clouds. We demonstrate the utility of the spherical convolutional neural network for 3D object classification on standard benchmark datasets.", "title": "" }, { "docid": "917287666755fe4b1832f5b6025414bb", "text": "The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. 
The clinical indications of the new classification are discussed.", "title": "" }, { "docid": "ad5a8c3ee37219868d056b341300008e", "text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.", "title": "" }, { "docid": "7159d958139d684e4a74abe252788a40", "text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.", "title": "" }, { "docid": "e5edb616b5d0664cf8108127b0f8684c", "text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. 
The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.", "title": "" }, { "docid": "d341486002f2b0f5e620f5a63873577c", "text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.", "title": "" }, { "docid": "1e4a74d8d4ae131467e12911fd6ac281", "text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.", "title": "" }, { "docid": "c0a2fc4ffe5910ffe9a4a9fe983106c3", "text": "Robust inspection is important to ensure the safety of nuclear power plant components. An automated approach would require detecting often low contrast cracks that could be surrounded by or even within textures with similar appearances such as welding, scratches and grind marks. We propose a crack detection method for nuclear power plant inspection videos by fine tuning a deep neural network for detecting local patches containing cracks which are then grouped in spatial-temporal space for group-level classification. We evaluate the proposed method on a data set consisting of 17 videos consisting of nearly 150,000 frames of inspection video and provide comparison to prior methods.", "title": "" }, { "docid": "0c0d0b6d4697b1a0fc454b995bcda79a", "text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. 
They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.", "title": "" }, { "docid": "464f7d25cb2a845293a3eb8c427f872f", "text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.", "title": "" }, { "docid": "2c7bafac9d4c4fedc43982bd53c99228", "text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. 
Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.", "title": "" }, { "docid": "c2ad090abd3f540436d3385bb6f3f013", "text": "We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 “big” Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications. Our code is available together with all pretrained models at: https://github. 
com/datquocnguyen/jPTDP.", "title": "" }, { "docid": "0e45e57b4e799ebf7e8b55feded7e9e1", "text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.", "title": "" }, { "docid": "a90f865e053b9339052a4d00281dbd03", "text": "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output &#x2013; point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "title": "" }, { "docid": "0cae8939c57ff3713d7321102c80816e", "text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. 
An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.", "title": "" }, { "docid": "31fca4faa53520b240267562c9e394fe", "text": "Purpose – The aim of this study was two-fold: first, to examine the noxious effects of presenteeism on employees’ work well-being in a cross-cultural context involving Chinese and British employees; second, to explore the role of supervisory support as a pan-cultural stress buffer in the presenteeism process. Design/methodology/approach – Using structured questionnaires, the authors compared data collected from samples of 245 Chinese and 128 British employees working in various organizations and industries. Findings – Cross-cultural comparison revealed that the act of presenteeism was more prevalent among Chinese and they reported higher levels of strains than their British counterparts. Hierarchical regression analyses showed that presenteeism had noxious effects on exhaustion for both Chinese and British employees. Moreover, supervisory support buffered the negative impact of presenteeism on exhaustion for both Chinese and British employees. Specifically, the negative relation between presenteeism and exhaustion was stronger for those with more supervisory support. Practical implications – Presenteeism may be used as a career-protecting or career-promoting tactic. However, the negative effects of this behavior on employees’ work well-being across the culture divide should alert us to re-think its pros and cons as a career behavior. Employees in certain cultures (e.g. the hardworking Chinese) may exhibit more presenteeism behaviour, thus are in greater risk of ill-health. Originality/value – This is the first cross-cultural study demonstrating the universality of the act of presenteeism and its damaging effects on employees’ well-being. The authors’ findings of the buffering role of supervisory support across cultural contexts highlight the necessity to incorporate resources in mitigating the harmful impact of presenteeism.", "title": "" }, { "docid": "461062a51b0c33fcbb0f47529f3a6fba", "text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. 
Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.", "title": "" }, { "docid": "3c8e85a977df74c2fd345db9934d4699", "text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and righthand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.", "title": "" } ]
scidocsrr
ad11557e120de6ea0d14b61f7169719b
Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation
[ { "docid": "6298ab25b566616b0f3c1f6ee8889d19", "text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on Kinect device, we propose a multimodal method that rely on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.", "title": "" } ]
[ { "docid": "1f355bd6b46e16c025ba72aa9250c61d", "text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.", "title": "" }, { "docid": "36da2b6102762c80b3ae8068d764e220", "text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. 
Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, 2 British Journal of Educational Technology © 2009 The Authors. Journal compilation © 2009 Becta. only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. 
Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. Expectancy-value 3 © 2009 The Authors. Journal compilation © 2009 Becta. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. 
Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move", "title": "" }, { "docid": "8e65001ed1e4a3994a95df2626ff4d89", "text": "The most popular metric distance used in iris code matching is Hamming distance. In this paper, we improve the performance of iris code matching stage by applying adaptive Hamming distance. Proposed method works with Hamming subsets with adaptive length. Based on density of masked bits in the Hamming subset, each subset is able to expand and adjoin to the right or left neighbouring bits. The adaptive behaviour of Hamming subsets increases the accuracy of Hamming distance computation and improves the performance of iris code matching. Results of applying proposed method on Chinese Academy of Science Institute of Automation, CASIA V3.3 shows performance of 99.96% and false rejection rate 0.06.", "title": "" }, { "docid": "868fe4091a136f16f6844e8739b65902", "text": "This paper uses an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP). The RAP is a well known NP-hard problem which has been the subject of much prior work, generally in a restricted form where each subsystem must consist of identical components in parallel to make computations tractable. Meta-heuristic methods overcome this limitation, and offer a practical way to solve large instances of the relaxed RAP where different components can be placed in parallel. The ant colony method has not yet been used in reliability design, yet it is a method that is expressly designed for combinatorial problems with a neighborhood structure, as in the case of the RAP. An ant colony optimization algorithm for the RAP is devised & tested on a well-known suite of problems from the literature. It is shown that the ant colony method performs with little variability over problem instance or random number seed. It is competitive with the best-known heuristics for redundancy allocation.", "title": "" }, { "docid": "ef3ac22e7d791113d08fd778a79008c3", "text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. 
The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" }, { "docid": "ad80f2e78e80397bd26dac5c0500266c", "text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the eq norm. 
We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.", "title": "" }, { "docid": "65a4197d7f12c320a34fdd7fcac556af", "text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification", "title": "" }, { "docid": "43a7e786704b5347f3b67c08ac9c4f70", "text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.", "title": "" }, { "docid": "0d25072b941ee3e8690d9bd274623055", "text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. 
Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "3072b7d80b0e9afffe6489996eca19aa", "text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. 
Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.", "title": "" }, { "docid": "8f1a5420deb75a2b664ceeaae8fc03f9", "text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.", "title": "" }, { "docid": "c2fc709aeb4c48a3bd2071b4693d4296", "text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "title": "" }, { "docid": "a17818c54117d502c696abb823ba5a6b", "text": "The next generation of multimedia services have to be optimized in a personalized way, taking user factors into account for the evaluation of individual experience. Previous works have investigated the influence of user factors mostly in a controlled laboratory environment which often includes a limited number of users and fails to reflect real-life environment. Social media, especially Facebook, provide an interesting alternative for Internet-based subjective evaluation. In this article, we develop (and open-source) a Facebook application, named YouQ1, as an experimental platform for studying individual experience for videos. 
Our results show that subjective experiments based on YouQ can produce reliable results as compared to a controlled laboratory experiment. Additionally, YouQ has the ability to collect user information automatically from Facebook, which can be used for modeling individual experience.", "title": "" }, { "docid": "5d80fa7763fd815e4e9530bc1a99b5d0", "text": "This paper introduces a new email dataset, consisting of both single and thread emails, manually annotated with summaries and keywords. A total of 349 emails and threads have been annotated. The dataset is our first step toward developing automatic methods for summarization and keyword extraction from emails. We describe the email corpus, along with the annotation interface, annotator guidelines, and agreement studies.", "title": "" }, { "docid": "9a4dab93461185ea98ccea7733081f73", "text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.", "title": "" }, { "docid": "569fed958b7a471e06ce718102687a1e", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). 
All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "cf95d41dc5a2bcc31b691c04e3fb8b96", "text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.", "title": "" } ]
scidocsrr
8626d44237740695b8dd963290f7f0b9
Influence Maximization Across Partially Aligned Heterogenous Social Networks
[ { "docid": "b9daa134744b8db757fc0857f479bd70", "text": "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks.\n To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.", "title": "" }, { "docid": "ee25e4acd98193e7dc3f89f3f98e42e0", "text": "Kempe et al. [4] (KKT) showed the problem of influence maximization is NP-hard and a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, it has two major sources of inefficiency. First, finding the expected spread of a node set is #P-hard. Second, the basic greedy algorithm is quadratic in the number of nodes. The first source is tackled by estimating the spread using Monte Carlo simulation or by using heuristics[4, 6, 2, 5, 1, 3]. Leskovec et al. proposed the CELF algorithm for tackling the second. In this work, we propose CELF++ and empirically show that it is 35-55% faster than CELF.", "title": "" } ]
[ { "docid": "e795381a345bf3cab74ddfd4d4763c1e", "text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.", "title": "" }, { "docid": "c10a58037c4b13953236831af304e660", "text": "A 32 nm generation logic technology is described incorporating 2nd-generation high-k + metal-gate technology, 193 nm immersion lithography for critical patterning layers, and enhanced channel strain techniques. The transistors feature 9 Aring EOT high-k gate dielectric, dual band-edge workfunction metal gates, and 4th-generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. Process yield, performance and reliability are demonstrated on a 291 Mbit SRAM test vehicle, with 0.171 mum2 cell size, containing >1.9 billion transistors.", "title": "" }, { "docid": "d90add899632bab1c5c2637c7080f717", "text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.", "title": "" }, { "docid": "ef77d042a04b7fa704f13a0fa5e73688", "text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. 
Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \" learning rule, \" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocam-pus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.", "title": "" }, { "docid": "d51408ad40bdc9a3a846aaf7da907cef", "text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.", "title": "" }, { "docid": "bea412d20a95c853fe06e7640acb9158", "text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. 
A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "169db6ecec2243e3566079cd473c7afe", "text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.", "title": "" }, { "docid": "cdd27bbcbab81a243dda6bb855fb8f72", "text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.", "title": "" }, { "docid": "2bf48ea6d0fd3bd4776dc0a90e89254b", "text": "OBJECTIVES\nTo test whether individual differences in gratitude are related to sleep after controlling for neuroticism and other traits. To test whether pre-sleep cognitions are the mechanism underlying this relationship.\n\n\nMETHOD\nA cross-sectional questionnaire study was conducted with a large (186 males, 215 females) community sample (ages=18-68 years, mean=24.89, S.D.=9.02), including 161 people (40%) scoring above 5 on the Pittsburgh Sleep Quality Index, indicating clinically impaired sleep. 
Measures included gratitude, the Pittsburgh Sleep Quality Index (PSQI), self-statement test of pre-sleep cognitions, the Mini-IPIP scales of Big Five personality traits, and the Social Desirability Scale.\n\n\nRESULTS\nGratitude predicted greater subjective sleep quality and sleep duration, and less sleep latency and daytime dysfunction. The relationship between gratitude and each of the sleep variables was mediated by more positive pre-sleep cognitions and less negative pre-sleep cognitions. All of the results were independent of the effect of the Big Five personality traits (including neuroticism) and social desirability.\n\n\nCONCLUSION\nThis is the first study to show that a positive trait is related to good sleep quality above the effect of other personality traits, and to test whether pre-sleep cognitions are the mechanism underlying the relationship between any personality trait and sleep. The study is also the first to show that trait gratitude is related to sleep and to explain why this occurs, suggesting future directions for research, and novel clinical implications.", "title": "" }, { "docid": "1d3192e66e042e67dabeae96ca345def", "text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.", "title": "" }, { "docid": "f6388d37976740ebb789e7d5f6c072f1", "text": "With the advent of image and video representation of visual scenes in digital computer, subsequent necessity of vision-substitution representation of a given image is felt. The medium for non-visual representation of an image is chosen to be sound due to well developed auditory sensing ability of human beings and wide availability of cheap audio hardware. Visionary information of an image can be conveyed to blind and partially sighted persons through auditory representation of the image within some of the known limitations of human hearing system. The research regarding image sonification has mostly evolved through last three decades. The paper also discusses in brief about the reverse mapping, termed as sound visualization. This survey approaches to summarize the methodologies and issues of the implemented and unimplemented experimental systems developed for subjective sonification of image scenes and let researchers accumulate knowledge about the previous direction of researches in this domain.", "title": "" }, { "docid": "adc03d95eea19cede1ea91aae733943b", "text": "In this paper, we discuss the emerging application of device-free localization (DFL) using wireless sensor networks, which find people and objects in the environment in which the network is deployed, even in buildings and through walls. 
These networks are termed “RF sensor networks” because the wireless network itself is the sensor, using radio-frequency (RF) signals to probe the deployment area. DFL in cluttered multipath environments has been shown to be feasible, and in fact benefits from rich multipath channels. We describe modalities of measurements made by RF sensors, the statistical models which relate a person's position to channel measurements, and describe research progress in this area.", "title": "" }, { "docid": "45043fe3e4aa28daddea21c6546e7640", "text": "The Booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-$4$ (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix-$8$ Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate $2$-bit adder is deliberately designed for calculating the sum of $1\times$ and $2\times$ of a binary number. This adder requires a small area, a low power and a short critical path delay. Subsequently, the $2$-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. In the pursuit of a trade-off between accuracy and power consumption, two signed $16\times 16$ bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder with and without the truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall performance in terms of hardware and accuracy when compared to other approximate Booth multiplier designs. 
Finally, the approximate multipliers are applied to the design of a low-pass FIR filter and they show better performance than other approximate Booth multipliers.", "title": "" }, { "docid": "30dfcf624badf766c3c7070548a47af4", "text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing-by orders of magnitude-the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshhold where entirely new ways of organizing human activities become desirable. For 2 example, new capabilities for communicating information faster, less expensively, and …", "title": "" }, { "docid": "c0650814388c7e1de19ee6e668d40e69", "text": "In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. 
We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi agent system.", "title": "" }, { "docid": "886c284d72a01db9bc4eb9467e14bbbb", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "1c4e71d00521219717607cbef90b5bec", "text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. 
Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.", "title": "" }, { "docid": "c3f4f7d75c1b5cfd713ad7a10c887a3a", "text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.", "title": "" }, { "docid": "d161ab557edb4268a0ebc606bb9dbcb6", "text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. The hypothesis of this paper is that the results obtained by applying traditional similarities measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calcúlate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corrobórate the excellent behaviour of the singularity measure proposed.", "title": "" }, { "docid": "a93bf6b8408bf0adba4985e7bd571d29", "text": "The modern data compression is mainly based on two approaches to entropy coding: Huffman (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities easily approaching theoretical compression rate limit (Shannon entropy), but at cost of much larger computational cost. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding, which allows to end this tradeoff between speed and rate: the recent implementation [1] provides about 50% faster decoding than HC for 256 size alphabet, with compression rate similar to provided by AC. This advantage is due to being simpler than AC: using single natural number as the state, instead of two to represent a range. Beside simplifying renormalization, it allows to put the entire behavior for given probability distribution into a relatively small table: defining entropy coding automaton. The memory cost of such table for 256 size alphabet is a few kilobytes. There is a large freedom while choosing a specific table using pseudorandom number generator initialized with cryptographic key for this purpose allows to simultaneously encrypt the data. This article also introduces and discusses many other variants of this new entropy coding approach, which can provide direct alternatives for standard AC, for large alphabet range coding, or for approximated quasi arithmetic coding.", "title": "" } ]
scidocsrr
8ea6c2e2d82663cb0a47e7863d07b2ae
Projective Feature Learning for 3D Shapes with Multi-View Depth Images
[ { "docid": "0964d1cc6584f2e20496c2f02952ba46", "text": "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45% verification accuracy on LFW is achieved with only weakly aligned faces.", "title": "" } ]
[ { "docid": "614174e5e1dffe9824d7ef8fae6fb499", "text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.", "title": "" }, { "docid": "0f5caf6bb5e0fdb99fba592fd34f1a8b", "text": "Lawrence Kohlberg (1958) agreed with Piaget's (1932) theory of moral development in principle but wanted to develop his ideas further. He used Piaget’s storytelling technique to tell people stories involving moral dilemmas. In each case, he presented a choice to be considered, for example, between the rights of some authority and the needs of some deserving individual who is being unfairly treated. One of the best known of Kohlberg’s (1958) stories concerns a man called Heinz who lived somewhere in Europe. Heinz’s wife was dying from a particular type of cancer. Doctors said a new drug might save her. The drug had been discovered by a local chemist, and the Heinz tried desperately to buy some, but the chemist was charging ten times the money it cost to make the drug, and this was much more than the Heinz could afford. Heinz could only raise half the money, even after help from family and friends. He explained to the chemist that his wife was dying and asked if he could have the drug cheaper or pay the rest of the money later. The chemist refused, saying that he had discovered the drug and was going to make money from it. The husband was desperate to save his wife, so later that night he broke into the chemist’s and stole the drug.", "title": "" }, { "docid": "61980865ef90d0236af464caf2005024", "text": "Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, maybe more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, obeying driving training in a virtual environment under the instruction of the operator. Four types of enthrones (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. Electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. SVM classification algorithm using radial basis function as kernel function obtained better results. A combined entropy-based method demonstrates good classification performance for studying driver fatigue detection.", "title": "" }, { "docid": "c4fef61aa26aa1d3ef693845b2ff3ee0", "text": "According to AV vendors malicious software has been growing exponentially last years. 
One of the main reasons for these high volumes is that in order to evade detection, malware authors started using polymorphic and metamorphic techniques. As a result, traditional signature-based approaches to detect malware are insufficient against new malware, and the categorization of malware samples has become essential to know the basis of the behavior of malware and to fight back cybercriminals. During the last decade, solutions that fight against malicious software have begun using machine learning approaches. Unfortunately, there are few open-source datasets available for the academic community. One of the biggest datasets available was released last year in a competition hosted on Kaggle with data provided by Microsoft for the Big Data Innovators Gathering (BIG 2015). This thesis presents two novel and scalable approaches using Convolutional Neural Networks (CNNs) to assign malware to its corresponding family. On one hand, the first approach makes use of CNNs to learn a feature hierarchy to discriminate among samples of malware represented as gray-scale images. On the other hand, the second approach uses the CNN architecture introduced by Yoon Kim [12] to classify malware samples according to their x86 instructions. The proposed methods achieved an improvement of 93.86% and 98.56% with respect to the equal probability benchmark.", "title": "" }, { "docid": "dfc9099b1b31d5f214b341c65fbb8e92", "text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\circ}$ phase difference. Both linearly polarized modes are designed to operate in the 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.", "title": "" }, { "docid": "5e43dd30c8cf58fe1b79686b33a015b9", "text": "We review Boltzmann machines extended for time-series. These models often have recurrent structure, and back propagation through time (BPTT) is used to learn their parameters. The per-step computational complexity of BPTT in online learning, however, grows linearly with respect to the length of preceding time-series (i.e., the learning rule is not local in time), which limits the applicability of BPTT in online learning. We then review dynamic Boltzmann machines (DyBMs), whose learning rule is local in time. DyBM’s learning rule relates to spike-timing dependent plasticity (STDP), which has been postulated and experimentally confirmed for biological neural networks.", "title": "" }, { "docid": "040f73fc915d3799193abf5e3a48e8f4", "text": "BACKGROUND\nDiphallia is a very rare anomaly and is seen once in every 5.5 million live births. True diphallia with normal penile structures is extremely rare. 
Surgical management for patients with complete penile duplication without any penile or urethral pathology is challenging.\n\n\nCASE REPORT\nA 4-year-old boy presented with diphallia. Initial physical examination revealed complete penile duplication, urine flow from both penises, meconium flow from the right urethra, and anal atresia. Further evaluations showed double colon and rectum, double bladder, and a large recto-vesical fistula. Two cavernous bodies and one spongious body were detected in each penile body. The surgical treatment plan consisted of right total penectomy and end-to-side urethra-urethrostomy. No postoperative complications and no voiding dysfunction were detected during the 18-month follow-up.\n\n\nCONCLUSION\nPenile duplication is a rare anomaly, which presents differently in each patient. Because of this, the treatment should be individualized, and end-to-side urethra-urethrostomy may be an alternative to removing the posterior urethra. This approach eliminates the risk of damaging the prostate gland and sphincter.", "title": "" }, { "docid": "48c4b2a708f2607a8d66b642e917433d", "text": "In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We will describe the control interface which is necessary for smooth, brain-controlled driving. In a second scenario, decisions for path selection at intersections and forks are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and will present results on accuracy, reaction times and usability.", "title": "" }, { "docid": "b4cadd9179150203638ff9b045a4145d", "text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by the solvent-casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of the drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in the membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.", "title": "" }, { "docid": "b123916f2795ab6810a773ac69bdf00b", "text": "The acceptance of open data practices by individuals and organizations has led to an enormous explosion in data production on the Internet. 
The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various reasons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.", "title": "" }, { "docid": "8fdfebc612ff46103281fcdd7c9d28c8", "text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.", "title": "" }, { "docid": "eb9b4bea2d1a6230f8fb9e742bb7bc23", "text": "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce a methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.", "title": "" }, { "docid": "9c2e89bad3ca7b7416042f95bf4f4396", "text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. 
To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.", "title": "" }, { "docid": "3fa5de33e7ccd6c440a4a65a5681f8b8", "text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.", "title": "" }, { "docid": "5793cf03753f498a649c417e410c325e", "text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "title": "" }, { "docid": "b1960cfe66e08bac1d4ff790ecfb0190", "text": "Cloud federations are a new collaboration paradigm where organizations share data across their private cloud infrastructures. However, the adoption of cloud federations is hindered by federated organizations' concerns on potential risks of data leakage and data misuse. For cloud federations to be viable, federated organizations' privacy concerns should be alleviated by providing mechanisms that allow organizations to control which users from other federated organizations can access which data. We propose a novel identity and access management system for cloud federations. The system allows federated organizations to enforce attribute-based access control policies on their data in a privacy-preserving fashion. Users are granted access to federated data when their identity attributes match the policies, but without revealing their attributes to the federated organization owning data. The system also guarantees the integrity of the policy evaluation process by using block chain technology and Intel SGX trusted hardware. 
It uses block chain to ensure that users identity attributes and access control policies cannot be modified by a malicious user, while Intel SGX protects the integrity and confidentiality of the policy enforcement process. We present the access control protocol, the system architecture and discuss future extensions.", "title": "" }, { "docid": "b7e78ca489cdfb8efad03961247e12f2", "text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling", "title": "" }, { "docid": "7431ee071307189e58b5c7a9ce3a2189", "text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.", "title": "" }, { "docid": "8a22660b73d11ee9c634579527049d43", "text": "Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms that are jointly adversarially trained with the generators and discriminators. 
We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. Input Ours CycleGAN [1] RA [2] DiscoGAN [3] UNIT [4] DualGAN [5] Figure 1: By explicitly modeling attention, our algorithm is able to better alter the object of interest in unsupervised image-to-image translation tasks, without changing the background at the same time.", "title": "" }, { "docid": "ec593c78e3b2bc8f9b8a657093daac49", "text": "Analyses of 3-D seismic data in predominantly basin-floor settings offshore Indonesia, Nigeria, and the Gulf of Mexico, reveal the extensive presence of gravity-flow depositional elements. Five key elements were observed: (1) turbidity-flow leveed channels, (2) channeloverbank sediment waves and levees, (3) frontal splays or distributarychannel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets. Each depositional element displays a unique morphology and seismic expression. The reservoir architecture of each of these depositional elements is a function of the interaction between sedimentary process, sea-floor morphology, and sediment grain-size distribution. (1) Turbidity-flow leveed-channel widths range from greater than 3 km to less than 200 m. Sinuosity ranges from moderate to high, and channel meanders in most instances migrate down-system. The highamplitude reflection character that commonly characterizes these features suggests the presence of sand within the channels. In some instances, high-sinuosity channels are associated with (2) channel-overbank sediment-wave development in proximal overbank levee settings, especially in association with outer channel bends. These sediment waves reach heights of 20 m and spacings of 2–3 km. The crests of these sediment waves are oriented normal to the inferred transport direction of turbidity flows, and the waves have migrated in an upflow direction. Channel-margin levee thickness decreases systematically down-system. Where levee thickness can no longer be resolved seismically, high-sinuosity channels feed (3) frontal splays or low-sinuosity, distributary-channel complexes. Low-sinuosity distributary-channel complexes are expressed as lobate sheets up to 5–10 km wide and tens of kilometers long that extend to the distal edges of these systems. They likely comprise sheet-like sandstone units consisting of shallow channelized and associated sand-rich overbank deposits. Also observed are (4) crevasse-splay deposits, which form as a result of the breaching of levees, commonly at channel bends. Similar to frontal splays, but smaller in size, these deposits commonly are characterized by sheet-like turbidites. (5) Debris-flow deposits comprise low-sinuosity channel fills, narrow elongate lobes, and sheets and are characterized seismically by contorted, chaotic, low-amplitude reflection patterns. These deposits commonly overlie striated or grooved pavements that can be up to tens of kilometers long, 15 m deep, and 25 m wide. Where flows are unconfined, striation patterns suggest that divergent flow is common. Debris-flow deposits extend as far basinward as turbidites, and individual debris-flow units can reach 80 m in thickness and commonly are marked by steep edges. Transparent to chaotic seismic reflection character suggest that these deposits are mud-rich. 
Stratigraphically, deep-water basin-floor successions commonly are characterized by mass-transport deposits at the base, overlain by turbidite frontal-splay deposits and subsequently by leveed-channel deposits. Capping this succession is another mass-transport unit ultimately overlain and draped by condensed-section deposits. This succession can be related to a cycle of relative sea-level change and associated events at the corresponding shelf edge. Commonly, deposition of a deep-water sequence is initiated with the onset of relative sea-level fall and ends with subsequent rapid relative sea-level rise. INTRODUCTION The understanding of deep-water depositional systems has advanced significantly in recent years. In the past, much understanding of deep-water sedimentation came from studies of outcrops, recent fan systems, and 2D reflection seismic data (Bouma 1962; Mutti and Ricci Lucchi 1972; Normark 1970, 1978; Walker 1978; Posamentier et al. 1991; Weimer 1991; Mutti and Normark 1991). However, in recent years this knowledge has advanced significantly because of (1) the interest by petroleum companies in deep-water exploration (e.g., Pirmez et al. 2000), and the advent of widely available high-quality 3D seismic data across a broad range of deepwater environments (e.g., Beaubouef and Friedman 2000; Posamentier et al. 2000), (2) the recent drilling and coring of both near-surface and reservoir-level deep-water systems (e.g., Twichell et al. 1992), and (3) the increasing utilization of deep-tow side-scan sonar and other imaging devices (e.g., Twichell et al. 1992; Kenyon and Millington 1995). It is arguably the first factor that has had the most significant impact on our understanding of deep-water systems. Three-dimensional seismic data afford an unparalleled view of the deep-water depositional environment, in some instances with vertical resolution down to 2–3 m. Seismic time slices, horizon-datum time slices, and interval attributes provide images of deepwater depositional systems in map view that can then be analyzed from a geomorphologic perspective. Geomorphologic analyses lead to the identification of depositional elements, which, when integrated with seismic profiles, can yield significant stratigraphic insight. Finally, calibration by correlation with borehole data, including logs, conventional core, and biostratigraphic samples, can provide the interpreter with an improved understanding of the geology of deep-water systems. The focus of this study is the deep-water component of a depositional sequence. We describe and discuss only those elements and stratigraphic successions that are present in deep-water depositional environments. The examples shown in this study largely are Pleistocene in age and most are encountered within the uppermost 400 m of substrate. These relatively shallowly buried features represent the full range of lowstand deep-water depositional sequences from early and late lowstand through transgressive and highstand deposits. Because they are not buried deeply, these stratigraphic units commonly are well-imaged on 3D seismic data. It is also noteworthy that although the examples shown here largely are of Pleistocene age, the age of these deposits should not play a significant role in subsequent discussion. What determines the architecture of deep-water deposits are the controlling parameters of flow discharge, sand-to-mud ratio, slope length, slope gradient, and rugosity of the seafloor, and not the age of the deposits. 
It does not matter whether these deposits are Pleistocene, Carboniferous, or Precambrian; the physical ‘‘first principles’’ of sediment gravity flow apply without distinguishing between when these deposits formed. However, from the perspective of studying deep-water turbidites it is advantageous that the Pleistocene was such an active time in the deepwater environment, resulting in deposition of numerous shallowly buried, well-imaged, deep-water systems. Depositional Elements Approach This study is based on the grouping of similar geomorphic features referred to as depositional elements. Depositional elements are defined by 368 H.W. POSAMENTIER AND V. KOLLA FIG. 1.—Schematic depiction of principal depositional elements in deep-water settings. Mutti and Normark (1991) as the basic mappable components of both modern and ancient turbidite systems and stages that can be recognized in marine, outcrop, and subsurface studies. These features are the building blocks of landscapes. The focus of this study is to use 3D seismic data to characterize the geomorphology and stratigraphy of deep-water depositional elements and infer process of deposition where appropriate. Depositional elements can vary from place to place and in the same place through time with changes of environmental parameters such as sand-to-mud ratio, flow discharge, and slope gradient. In some instances, systematic changes in these environmental parameters can be tied back to changes of relative sea level. The following depositional elements will be discussed: (1) turbidityflow leveed channels, (2) overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets (Fig. 1). Each element is described and depositional processes are discussed. Finally, the exploration significance of each depositional element is reviewed. Examples are drawn from three deep-water slope and basin-floor settings: the Gulf of Mexico, offshore Nigeria, and offshore eastern Kalimantan, Indonesia. We utilized various visualization techniques, including 3D perspective views, horizon slices, and horizon and interval attribute displays, to bring out the detailed characteristics of depositional elements and their respective geologic settings. The deep-water depositional elements we present here are commonly characterized by peak seismic frequencies in excess of 100 Hz. The vertical resolution at these shallow depths of burial is in the range of 3–4 m, thus affording high-resolution images of depositional elements. We hope that our study, based on observations from the shallow subsurface, will provide general insights into the reservoir architecture of deep-water depositional elements, which can be extrapolated to more poorly resolved deep-water systems encountered at deeper exploration depths. DEPOSITIONAL ELEMENTS The following discussion focuses on five depositional elements in deepwater environments. These include turbidity-flow leveed channels, overbank or levee deposits, frontal splays or distributary-channel complexes, crevasse splays, and debris-flow sheets, lobes, and channels (Fig. 1). Turbidity-Flow Leveed Channels Leveed channels are common depositional elements in slope and basinfloor environments. Leveed channels observed in this study range in width from 3 km to less than 250 m and in sinuosity (i.e., the ratio of channelaxis length to channel-belt length) between 1.2 and 2.2. 
Some leveed channels are internally characterized by complex cut-and-fill architecture. Many leveed channels show evidence ", "title": "" } ]
scidocsrr
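The records in this dump all share the same row shape: a 32-character query_id, a free-text query, positive_passages and negative_passages lists whose items carry docid, text, and title fields, and a subset tag (scidocsrr in this section). As a hedged illustration only, the following Python sketch shows one way rows like these could be read from a JSON Lines export and flattened into (query, passage, label) pairs for training a retrieval model; the path data/scidocsrr.jsonl and the helper names are assumptions made for this example, not part of any documented loader for this corpus.

```python
import json

# Hypothetical path: assumes the rows in this dump were exported as JSON Lines,
# one record per line, with the fields visible in this section.
PATH = "data/scidocsrr.jsonl"

def iter_records(path):
    """Yield one retrieval record (dict) per non-empty line of a JSONL export."""
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

def to_training_pairs(record):
    """Flatten a record into (query, passage_text, label) pairs."""
    pairs = []
    for passage in record.get("positive_passages", []):
        pairs.append((record["query"], passage["text"], 1))
    for passage in record.get("negative_passages", []):
        pairs.append((record["query"], passage["text"], 0))
    return pairs

if __name__ == "__main__":
    for record in iter_records(PATH):
        pairs = to_training_pairs(record)
        print(record["query_id"], record["subset"], len(pairs), "pairs")
```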
8890d941123da99a28bbdfe2b12638ca
QoE and power efficiency tradeoff for fog computing networks with fog node cooperation
[ { "docid": "37be9e992a6a99af165f7c6ddbbed36d", "text": "The past 15 years have seen the rise of the Cloud, along with rapid increase in Internet backbone traffic and more sophisticated cellular core networks. There are three different types of “Clouds:” (1) data center, (2) backbone IP network and (3) cellular core network, responsible for computation, storage, communication and network management. Now the functions of these three types of Clouds are “descending” to be among or near the end users, i.e., to the edge of networks, as “Fog.”", "title": "" }, { "docid": "ae19bd4334434cfb8c5ac015dc8d3bd4", "text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.", "title": "" }, { "docid": "9e4417a0ea21de3ffffb9017f0bad705", "text": "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.", "title": "" } ]
[ { "docid": "0a7558a172509707b33fcdfaafe0b732", "text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.", "title": "" }, { "docid": "4bd161b3e91dea05b728a72ade72e106", "text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: julio.rodriguez@epfl.ch and jrodrigu@physik.uni-bielefeld.de", "title": "" }, { "docid": "84d2cb7c4b8e0f835dab1cd3971b60c5", "text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.", "title": "" }, { "docid": "88128ec1201e2202f13f2c09da0f07f2", "text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and l0 nm. 
The discovery in 1988 of giant magnetoresistance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spin-polarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer. It can dominate the Larmor response to the magnetic field induced by the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10^2 to 10^3 nm. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure ferromagnet/insulator/ferromagnet (F/I/F) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. However, the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. 
We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F1 and F2 are ferromagnetic. The instantaneous macroscopic vectors ħS1 and ħS2 forming the included angle θ represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S1 of local ferromagnetic polarization in F1 will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction S1 is incident from spacer B.", "title": "" }, { "docid": "7161122eaa9c9766e9914ba0f2ee66ef", "text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.", "title": "" }, { "docid": "b741698d7e4d15cb7f4e203f2ddbce1d", "text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.", "title": "" }, { "docid": "f35007fdca9c35b4c243cb58bd6ede7a", "text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).", "title": "" }, { "docid": "957170b015e5acd4ab7ce076f5a4c900", "text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. 
As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "title": "" }, { "docid": "d30343a3a888139eb239c6605ccb0f41", "text": "Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.", "title": "" }, { "docid": "70b325c1767e9977ac27894cfa051fab", "text": "BACKGROUND\nDecreased systolic function is central to the pathogenesis of heart failure in millions of patients worldwide, but mechanism-related adverse effects restrict existing inotropic treatments. This study tested the hypothesis that omecamtiv mecarbil, a selective cardiac myosin activator, will augment cardiac function in human beings.\n\n\nMETHODS\nIn this dose-escalating, crossover study, 34 healthy men received a 6-h double-blind intravenous infusion of omecamtiv mecarbil or placebo once a week for 4 weeks. Each sequence consisted of three ascending omecamtiv mecarbil doses (ranging from 0·005 to 1·0 mg/kg per h) with a placebo infusion randomised into the sequence. 
Vital signs, blood samples, electrocardiographs (ECGs), and echocardiograms were obtained before, during, and after each infusion. The primary aim was to establish maximum tolerated dose (the highest infusion rate tolerated by at least eight participants) and plasma concentrations of omecamtiv mecarbil; secondary aims were evaluation of pharmacodynamic and pharmacokinetic characteristics, safety, and tolerability. This study is registered at ClinicalTrials.gov, number NCT01380223.\n\n\nFINDINGS\nThe maximum tolerated dose of omecamtiv mecarbil was 0·5 mg/kg per h. Omecamtiv mecarbil infusion resulted in dose-related and concentration-related increases in systolic ejection time (mean increase from baseline at maximum tolerated dose, 85 [SD 5] ms), the most sensitive indicator of drug effect (r(2)=0·99 by dose), associated with increases in stroke volume (15 [2] mL), fractional shortening (8% [1]), and ejection fraction (7% [1]; all p<0·0001). Omecamtiv mecarbil increased atrial contractile function, and there were no clinically relevant changes in diastolic function. There were no clinically significant dose-related adverse effects on vital signs, serum chemistries, ECGs, or adverse events up to a dose of 0·625 mg/kg per h. The dose-limiting toxic effect was myocardial ischaemia due to excessive prolongation of systolic ejection time.\n\n\nINTERPRETATION\nThese first-in-man data show highly dose-dependent augmentation of left ventricular systolic function in response to omecamtiv mecarbil and support potential clinical use of the drug in patients with heart failure.\n\n\nFUNDING\nCytokinetics Inc.", "title": "" }, { "docid": "b5ecd3e4e14cae137b88de8bd4c92c5d", "text": "Design and analysis of ultrahigh-frequency (UHF) micropower rectifiers based on a diode-connected dynamic threshold MOSFET (DTMOST) is discussed. An analytical design model for DTMOST rectifiers is derived based on curve-fitted diode equation parameters. Several DTMOST six-stage charge-pump rectifiers were designed and fabricated using a CMOS 0.18-mum process with deep n-well isolation. Measured results verified the design model with average accuracy of 10.85% for an input power level between -4 and 0 dBm. At the same time, three other rectifiers based on various types of transistors were fabricated on the same chip. The measured results are compared with a Schottky diode solution.", "title": "" }, { "docid": "bde70da078bba2a63899cc7eb2a9aaf9", "text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.", "title": "" }, { "docid": "6883add239f58223ef1941d5044d4aa8", "text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. 
A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.", "title": "" }, { "docid": "ba9030da218e0ba5d4369758d80be5b9", "text": "Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching, or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.", "title": "" }, { "docid": "5cfef434d0d33ac5859bcdb77227d7b7", "text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.", "title": "" }, { "docid": "16546193b0096392d4f5ebf6ad7d35a8", "text": "According to the ways to see the real environments, mirror metaphor augmented reality systems can be classified into video see-through virtual mirror displays and reflective half-mirror displays. The two systems have distinctive characteristics and application fields with different types of complexity. 
In this paper, we introduce a system configuration to implement a prototype of a reflective half-mirror display-based augmented reality system. We also present a two-phase calibration method using an extra camera for the system. Finally, we describe three error sources in the proposed system and show the result of analysis of these errors with several experiments.", "title": "" }, { "docid": "bbea93884f1f0189be1061939783a1c0", "text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6% and 45% of nulliparous adolescents, the prevalence of non-neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.", "title": "" }, { "docid": "cac556bfbdf64e655766da2404cb24c2", "text": "How can we learn a classifier that is “fair” for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness. ACM Reference format: Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. 2017. 
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. In Proceedings of 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning, Halifax, Canada, August 2017 (FAT/ML ’17), 5 pages.", "title": "" } ]
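The last passage in the list above describes using adversarial training to strip information about a sensitive attribute from a learned latent representation. The sketch below shows one common way to realize that idea with a gradient-reversal layer in PyTorch; it is a generic illustration under assumed layer sizes and loss weights, not the authors' actual architecture, objective weighting, or data.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
task_head = nn.Linear(16, 1)        # predicts the actual label (e.g., a click)
adversary_head = nn.Linear(16, 1)   # tries to recover the sensitive attribute

params = list(encoder.parameters()) + list(task_head.parameters()) + list(adversary_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def training_step(x, y, s, lambd=1.0):
    """x: features, y: task labels, s: sensitive attribute (needed only on this batch)."""
    h = encoder(x)
    task_loss = bce(task_head(h).squeeze(-1), y)
    # The adversary sees the representation through the gradient-reversal layer,
    # so training it pushes the encoder toward hiding the sensitive attribute.
    adv_loss = bce(adversary_head(GradReverse.apply(h, lambd)).squeeze(-1), s)
    loss = task_loss + adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task_loss.item(), adv_loss.item()

# Example call with random tensors, purely to show the expected shapes.
x = torch.randn(8, 32); y = torch.randint(0, 2, (8,)).float(); s = torch.randint(0, 2, (8,)).float()
print(training_step(x, y, s))
```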
scidocsrr
c14512660c09c02d1faa4b6688ef42f5
Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks
[ { "docid": "ffeb8ab86966a7ac9b8c66bdec7bfc32", "text": "Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among a lot of weak connections. To explain these connectivity patterns, we created a model of spike timing–dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.", "title": "" } ]
[ { "docid": "675795d2799838f72898afcfcbd77370", "text": "Data-driven techniques for interactive narrative generation are the subject of growing interest. Reinforcement learning (RL) offers significant potential for devising data-driven interactive narrative generators that tailor players’ story experiences by inducing policies from player interaction logs. A key open question in RL-based interactive narrative generation is how to model complex player interaction patterns to learn effective policies. In this paper we present a deep RL-based interactive narrative generation framework that leverages synthetic data produced by a bipartite simulated player model. Specifically, the framework involves training a set of Q-networks to control adaptable narrative event sequences with long short-term memory network-based simulated players. We investigate the deep RL framework’s performance with an educational interactive narrative, CRYSTAL ISLAND. Results suggest that the deep RL-based narrative generation framework yields effective personalized interactive narratives.", "title": "" }, { "docid": "537cf2257d1ca9ef49f023dbdc109e0b", "text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.07.006 * Corresponding author. Tel.: +886 3 5712121x573 E-mail addresses: bill.net.tw@yahoo.com.tw (Y.-S (L.-I. Tong). The autoregressive integrated moving average (ARIMA), which is a conventional statistical method, is employed in many fields to construct models for forecasting time series. Although ARIMA can be adopted to obtain a highly accurate linear forecasting model, it cannot accurately forecast nonlinear time series. Artificial neural network (ANN) can be utilized to construct more accurate forecasting model than ARIMA for nonlinear time series, but explaining the meaning of the hidden layers of ANN is difficult and, moreover, it does not yield a mathematical equation. This study proposes a hybrid forecasting model for nonlinear time series by combining ARIMA with genetic programming (GP) to improve upon both the ANN and the ARIMA forecasting models. Finally, some real data sets are adopted to demonstrate the effectiveness of the proposed forecasting model. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "54c9c1323a03f0ef3af5eea204fd51ce", "text": "The fabrication and characterization of magnetic sensors consisting of double magnetic layers are described. Both thin film based material and wire based materials were used for the double layers. The sensor elements were fabricated by patterning NiFe/CoFe multilayer thin films. This thin film based sensor exhibited a constant output voltage per excitation magnetic field at frequencies down to 0.1 Hz. The magnetic sensor using a twisted FeCoV wire, the conventional material for the Wiegand effect, had the disadvantage of an asymmetric output voltage generated by an alternating magnetic field. It was found that the magnetic wire whose ends were both slightly etched exhibited a symmetric output voltage.", "title": "" }, { "docid": "f917a32b3bfed48dfe14c05d248ef53f", "text": "Recently Adleman has shown that a small traveling salesman problem can be solved by molecular operations. In this paper we show how the same principles can be applied to breaking the Data Encryption Standard (DES). We describe in detail a library of operations which are useful when working with a molecular computer. We estimate that given one arbitrary (plain-text, cipher-text) pair, one can recover the DES key in about 4 months of work. 
Furthermore, we show that under chosen plain-text attack it is possible to recover the DES key in one day using some preprocessing. Our method can be generalized to break any cryptosystem which uses keys of length less than 64 bits.", "title": "" }, { "docid": "1315349a48c402398c7c4164c92e95bf", "text": "Over the past years, the computing industry has started various initiatives announced to increase computer security by means of new hardware architectures. The most notable effort is the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities as the possibility to verify the integrity of a platform (attestation) or binding quantities on a specific platform (sealing).In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to the attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use.To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software or/and hardware (configuration) as it is today's practice but only on the \"properties\" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. We also demonstrate, how a property-based attestation protocol can be realized based on the existing TC hardware such as a Trusted Platform Module (TPM).", "title": "" }, { "docid": "70bce8834a23bc84bea7804c58bcdefe", "text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. 
The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.", "title": "" }, { "docid": "d318f73ccfd1069acbf7e95596fb1028", "text": "In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices usage patterns is proposed and some preliminary results on learning set creation are described.", "title": "" }, { "docid": "5aa20cb4100085a12d02c6789ad44097", "text": "The rapid progress in nanoelectronics showed an urgent need for microwave measurement of impedances extremely different from the 50Ω reference impedance of measurement instruments. In commonly used methods input impedance or admittance of a device under test (DUT) is derived from measured value of its reflection coefficient causing serious accuracy problems for very high and very low impedances due to insufficient sensitivity of the reflection coefficient to impedance of the DUT. This paper brings theoretical description and experimental verification of a method developed especially for measurement of extreme impedances. The method can significantly improve measurement sensitivity and reduce errors caused by the VNA. It is based on subtraction (or addition) of a reference reflection coefficient and the reflection coefficient of the DUT by a passive network, amplifying the resulting signal by an amplifier and measuring the amplified signal as a transmission coefficient by a common vector network analyzer (VNA). A suitable calibration technique is also presented.", "title": "" }, { "docid": "cf2e23cddb72b02d1cca83b4c3bf17a8", "text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. 
Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiency- and flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were under Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. 
Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes. 
On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr", "title": "" }, { "docid": "4c004745828100f6ccc6fd660ee93125", "text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio steganographic technique aims at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a review of the current state of the art in digital audio steganographic techniques. 
We explore their potential and limitations to ensure secure communication. A comparison and an evaluation of the reviewed techniques are also presented in this paper.", "title": "" }, { "docid": "36fb4d86453a2e73c2989c04286b2ee2", "text": "Video super-resolution (SR) aims to generate a high-resolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": "" }, { "docid": "682fe9a6e4e30a38ce5c05ee1f809bd1", "text": "This chapter examines the effects of fiscal consolidation—tax hikes and government spending cuts—on economic activity. Based on a historical analysis of fiscal consolidation in advanced economies, and on simulations of the IMF's Global Integrated Monetary and Fiscal Model (GIMF), it finds that fiscal consolidation typically reduces output and raises unemployment in the short term. At the same time, interest rate cuts, a fall in the value of the currency, and a rise in net exports usually soften the contractionary impact. 
Consolidation is more painful when it relies primarily on tax hikes; this occurs largely because central banks typically provide less monetary stimulus during such episodes, particularly when they involve indirect tax hikes that raise inflation. Also, fiscal consolidation is more costly when the perceived risk of sovereign default is low. These findings suggest that budget deficit cuts are likely to be more painful if they occur simultaneously across many countries, and if monetary policy is not in a position to offset them. Over the long term, reducing government debt is likely to raise output, as real interest rates decline and the lighter burden of interest payments permits cuts to distortionary taxes. Budget deficits and government debt soared during the Great Recession. In 2009, the budget deficit averaged about 9 percent of GDP in advanced economies, up from only 1 percent of GDP in 2007. By the end of 2010, government debt is expected to reach about 100 percent of GDP—its highest level in 50 years. Looking ahead, population aging could create even more serious problems for public finances. In response to these worrisome developments, virtually all advanced economies will face the challenge of fiscal consolidation. Indeed, many governments are already undertaking or planning large spending cuts and tax hikes. An important and timely question is, therefore, whether fiscal retrenchment will hurt economic performance. Although there is widespread agreement that reducing debt has important long-term benefits, there is no consensus regarding the short-term effects of fiscal austerity. On the one hand, the conventional Keynesian view is that cutting spending or raising taxes reduces economic activity in the short term. On the other hand, a number of studies present evidence that cutting budget deficits can …", "title": "" }, { "docid": "b8c48e65558504284849e05c9d3f1a19", "text": "This paper examines factors that influence the prices of the five most common cryptocurrencies, such as Bitcoin, Ethereum, Dash, Litecoin, and Monero, over 2010-2018 using weekly data. The study employs the ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinants for all five cryptocurrencies both in the short and long run. Second, the attractiveness of cryptocurrencies also matters in terms of their price determination, but only in the long run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies is subject to a time factor. In other words, it travels slowly within the market. 
Third, the SP500 index seems to have a weak positive long-run impact on Bitcoin, Ethereum, and Litecoin, while its sign turns negative and loses significance in the short run, except for Bitcoin, which generates an estimate of -0.20 at the 10% significance level. Lastly, error-correction models for Bitcoin, Ethereum, Dash, Litecoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a long-run equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27%, respectively.", "title": "" }, { "docid": "35ffdb3e5b2ac637f7e8d796c4cdc97e", "text": "Pedestrian detection in real world scenes is a challenging problem. In recent years a variety of approaches have been proposed, and impressive results have been reported on a variety of databases. This paper systematically evaluates (1) various local shape descriptors, namely Shape Context and Local Chamfer descriptor and (2) four different interest point detectors for the detection of pedestrians. Those results are compared to the standard global Chamfer matching approach. A main result of the paper is that Shape Context trained on real edge images rather than on clean pedestrian silhouettes combined with the Hessian-Laplace detector outperforms all other tested approaches.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highly connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "f7239ce387f17b279263e6bdaff612d0", "text": "Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends and propose further steps on making web services systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. 
Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support and level of web services implementation. Findings – Support for context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates in multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, and security and privacy, have not been well addressed. Research limitations/implications – The number of systems analyzed is limited. Furthermore, the survey is based on published papers. Therefore, up-to-date information and development might not be taken into account. Originality/value – Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services which is built around, amongst others, mobile devices, web services, and pervasive environments.", "title": "" }, { "docid": "995ad137b6711f254c6b9852611242b5", "text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to use due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.", "title": "" } ]
scidocsrr
65a1853af116c63a9854549e34fd9d75
Texture-aware ASCII art synthesis with proportional fonts
[ { "docid": "921b024ca0a99e3b7cd3a81154d70c66", "text": "Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.", "title": "" }, { "docid": "07a1d62b56bd1e2acf4282f69e85fb93", "text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.", "title": "" } ]
[ { "docid": "3d4cfb2d3ba1e70e5dd03060f5d5f663", "text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.", "title": "" }, { "docid": "081da5941b0431d00b4058c26987d43f", "text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "98e9d8fb4a04ad141b3a196fe0a9c08b", "text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. 
In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.", "title": "" }, { "docid": "f24f686a705a1546d211ac37d5cc2fdb", "text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.", "title": "" }, { "docid": "894e4f975ce81a181025e65227e70b18", "text": "Gesturing and motion control have become common as interaction methods for video games since the advent of the Nintendo Wii game console. Despite the growing number of motion-based control platforms for video games, no set of shared design heuristics for motion control across the platforms has been published. Our approach in this paper combines analysis of player experiences across platforms. We work towards a collection of design heuristics for motion-based control by studying game reviews in two motion-based control platforms, Xbox 360 Kinect and PlayStation 3 Move. In this paper we present an analysis of player problems within 256 game reviews, on which we ground a set of heuristics for motion-controlled games.", "title": "" }, { "docid": "c89f44a3216a9411a42cb0a420f4b73b", "text": "Chemical fiber paper tubes are the essential spinning equipment on filament high-speed spinning and winding machine of the chemical fiber industry. The precision of its application directly impacts on the formation of the silk, determines the cost of the spinning industry. Due to the accuracy of its application requirements, the paper tubes with defects must be detected and removed. Traditional industrial defect detection methods are usually carried out using the target operator's characteristics, only to obtain surface information, not only the detection efficiency and accuracy is difficult to improve, due to human judgment, it's difficult to give effective algorithm for some targets. And the existing learning algorithms are also difficult to use the deep features, so they can not get good results. 
Based on the Faster-RCNN method in depth learning, this paper extracts the deep features of the defective target by Convolutional Neural Network (CNN), which effectively solves the internal joint defects that the traditional algorithm can not effectively detect. As to the external joints and damaged flaws that the traditional algorithm can detect, this algorithm has better results, the experimental accuracy rate can be raised up to 98.00%. At the same time, it can be applied to a variety of lighting conditions, reducing the pretreatment steps and improving efficiency. The experimental results show that the method is effective and worthy of further research.", "title": "" }, { "docid": "299e7f7d1c48d4a6a22c88dcf422f7a1", "text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.", "title": "" }, { "docid": "6bbc32ecaf54b9a51442f92edbc2604a", "text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.", "title": "" }, { "docid": "407574abdcba82be2e9aea5a9b38c0a3", "text": "In this paper, we investigate resource block (RB) assignment and modulation-and-coding scheme (MCS) selection to maximize downlink throughput of long-term evolution (LTE) systems, where all RB's assigned to the same user in any given transmission time interval (TTI) must use the same MCS. We develop several effective MCS selection schemes by using the effective packet-level SINR based on exponential effective SINR mapping (EESM), arithmetic mean, geometric mean, and harmonic mean. 
From both analysis and simulation results, we show that the system throughput of all the proposed schemes are better than that of the scheme in [7]. Furthermore, the MCS selection scheme using harmonic mean based effective packet-level SINR almost reaches the optimal performance and significantly outperforms the other proposed schemes.", "title": "" }, { "docid": "1d51506f851a8b125edd7edcd8c6bd1b", "text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.", "title": "" }, { "docid": "a49c8e6f222b661447d1de32e29d0f16", "text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.", "title": "" }, { "docid": "703f0baf67a1de0dfb03b3192327c4cf", "text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. 
Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.", "title": "" }, { "docid": "815feed9cce2344872c50da6ffb77093", "text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.", "title": "" }, { "docid": "d214ef50a5c26fb65d8c06ea7db3d07c", "text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.", "title": "" }, { "docid": "b7c0864be28d70d49ae4a28fb7d78f04", "text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. 
It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.", "title": "" }, { "docid": "d5a9d2a212deee5057a0289f72b51d9b", "text": "Compared to supervised feature selection, unsupervised feature selection tends to be more challenging due to the lack of guidance from class labels. Along with the increasing variety of data sources, many datasets are also equipped with certain side information of heterogeneous structure. Such side information can be critical for feature selection when class labels are unavailable. In this paper, we propose a new feature selection method, SideFS, to exploit such rich side information. We model the complex side information as a heterogeneous network and derive instance correlations to guide subsequent feature selection. Representations are learned from the side information network and the feature selection is performed in a unified framework. Experimental results show that the proposed method can effectively enhance the quality of selected features by incorporating heterogeneous side information.", "title": "" }, { "docid": "3294f746432ba9746a8cc8082a1021f7", "text": "CRYPTONITE is a programmable processor tailored to the needs of crypto algorithms. The design of CRYPTONITE was based on an in-depth application analysis in which standard crypto algorithms (AES, DES, MD5, SHA-1, etc) were distilled down to their core functionality. We describe this methodology and use AES as a central example. Starting with a functional description of AES, we give a high level account of how to implement AES efficiently in hardware, and present several novel optimizations (which are independent of CRYPTONITE).We then describe the CRYPTONITE architecture, highlighting how AES implementation issues influenced the design of the processor and its instruction set. CRYPTONITE is designed to run at high clock rates and be easy to implement in silicon while providing a significantly better performance/area/power tradeoff than general purpose processors.", "title": "" }, { "docid": "f9765c97a101a163a486b18e270d67f5", "text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 
2", "title": "" }, { "docid": "1ed9151f81e15db5bb08a7979d5eeddb", "text": "Deep learning has delivered its powerfulness in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage. Researchers have investigated on reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms with reduced asymptotic complexity of both computation and storage, making our approach distinguished from existing approaches. We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms achieving extraordinary processing speed.", "title": "" }, { "docid": "808de7fe99686dabb5b1ea28187cd406", "text": "Automated Guided Vehicles (AGVs) are being increasingly used for intelligent transportation and distribution of materials in warehouses and auto-production lines. In this paper, a preliminary hazard analysis of an AGV’s critical components is conducted by the approach of Failure Modes Effects and Criticality Analysis (FMECA). To implement this research, a particular AGV transport system is modelled as a phased mission. Then, Fault Tree Analysis (FTA) is adopted to model the causes of phase failure, enabling the probability of success in each phase and hence mission success to be determined. Through this research, a promising technical approach is established, which allows the identification of the critical AGV components and crucial mission phases of AGVs at the design stage. 1998 ACM Subject Classification B.8 Performance and Reliability", "title": "" } ]
scidocsrr
9a1fa0b7b8c2aef8ca0f36c7d5b5bc72
Insights into deep neural networks for speaker recognition
[ { "docid": "cd733cb756884a21cfcc9143e425f0f6", "text": "We propose a novel framework for speaker recognition in which extraction of sufficient statistics for the state-of-the-art i-vector model is driven by a deep neural network (DNN) trained for automatic speech recognition (ASR). Specifically, the DNN replaces the standard Gaussian mixture model (GMM) to produce frame alignments. The use of an ASR-DNN system in the speaker recognition pipeline is attractive as it integrates the information from speech content directly into the statistics, allowing the standard backends to remain unchanged. Improvement from the proposed framework compared to a state-of-the-art system are of 30% relative at the equal error rate when evaluated on the telephone conditions from the 2012 NIST speaker recognition evaluation (SRE). The proposed framework is a successful way to efficiently leverage transcribed data for speaker recognition, thus opening up a wide spectrum of research directions.", "title": "" }, { "docid": "e64f1f11ed113ca91094ef36eaf794a7", "text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardwareagnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.", "title": "" } ]
[ { "docid": "c14eca26d1dc76a5e533583a56e4bd5d", "text": "In restorative dentistry, the non-vital tooth and its restoration have been extensively studied from both its structural and esthetic aspects. The restoration of endodontically treated teeth has much in common with modern implantology: both must include multifaceted biological, biomechanical and esthetic considerations with a profound understanding of materials and techniques; both are technique sensitive and both require a multidisciplinary approach. And for both, two fundamental principles from team sports apply well: firstly, the weakest link determines the limits, and secondly, it is a very long way to the top, but a very short way to failure. Nevertheless, there is one major difference: if the tooth fails, there is the option of the implant, but if the implant fails, there is only another implant or nothing. The aim of this essay is to try to answer some clinically relevant conceptual questions and to give some clinical guidelines regarding the reconstructive aspects, based on scientific evidence and clinical expertise.", "title": "" }, { "docid": "c4d1d0d636e23c377473fe631022bef1", "text": "Electronic concept mapping tools provide a flexible vehicle for constructing concept maps, linking concept maps to other concept maps and related resources, and distributing concept maps to others. As electronic concept maps are constructed, it is often helpful for users to consult additional resources, in order to jog their memories or to locate resources to link to the map under construction. The World Wide Web provides a rich range of resources for these tasks—if the right resources can be found. This paper presents ongoing research on how to automatically generate Web queries from concept maps under construction, in order to proactively suggest related information to aid concept mapping. First, it examines how concept map structure and content can be exploited to automatically select terms to include in initial queries, based on studies of (1) how concept map structure influences human judgments of concept importance, and (2) the relative value of including information from concept labels and linking phrases. Second, it examines how a concept map can be used to refine future queries by reinforcing the weights of terms that have proven to be good discriminators for the topic of the concept map. The described methods are being applied to developing “intelligent suggesters” to support the concept mapping process.", "title": "" }, { "docid": "5a7b68c341e20d5d788e46c089cfd855", "text": "This study aims at investigating alcoholic inpatients' attachment system by combining a measurement of adult attachment style (AAQ, Hazan and Shaver, 1987. Journal of Personality and Social Psychology, 52(3): 511-524) and the degree of alexithymia (BVAQ, Bermond and Vorst, 1998. Bermond-Vorst Alexithymia Questionnaire, Unpublished data). Data were collected from 101 patients (71 men, 30 women) admitted to a psychiatric hospital in Belgium for alcohol use-related problems, between September 2003 and December 2004. To investigate the research question, cluster analyses and regression analyses are performed. We found that it makes sense to distinguish three subgroups of alcoholic inpatients with different degrees of impairment of the attachment system. 
Our results also reveal a pattern of correspondence between the severity of psychiatric symptoms-personality disorder traits (ADP-IV), anxiety (STAI), and depression (BDI-II-Nl)-and the severity of the attachment system's impairment. Limitations of the study and suggestions for further research are highlighted and implications for diagnosis and treatment are discussed.", "title": "" }, { "docid": "e85b761664a01273a10819566699bf4f", "text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.", "title": "" }, { "docid": "78d00cb1af094c91cc7877ba051f925e", "text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.", "title": "" }, { "docid": "30e47a275e7e00f80c8f12061575ee82", "text": "Spliddit is a first-of-its-kind fair division website, which offers provably fair solutions for the division of rent, goods, and credit. 
In this note, we discuss Spliddit's goals, methods, and implementation.", "title": "" }, { "docid": "3a5d43d86d39966aca2d93d1cf66b13d", "text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.", "title": "" }, { "docid": "6a1fa32d9a716b57a321561dfce83879", "text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .", "title": "" }, { "docid": "9d3e0a8af748c9addf598a27f414e0b2", "text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. 
The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.", "title": "" }, { "docid": "5064d758b361171310ac31c323aa734b", "text": "The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information.", "title": "" }, { "docid": "ffbab4b090448de06ff5237d43c5e293", "text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. 
Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).", "title": "" }, { "docid": "471db984564becfea70fb2946ef4871e", "text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.", "title": "" }, { "docid": "9cdc0646b8c057ead7000ec14736fc12", "text": "This paper presents a multilayer aperture coupled microstrip antenna with a non symmetric U-shaped feed line. The antenna structure consists of a rectangular patch which is excited through two slots on the ground plane. A parametric study is presented on the effects of the position and dimensions of the slots. Results show that the antenna has VSWR < 2 from 2.6 GHz to 5.4 GHz (70%) and the gain of the structure is more than 7 dB from 2.7 GHz to 4.4 GHz (48%).", "title": "" }, { "docid": "f3f70e5ba87399e9d44bda293a231399", "text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "10ef865d0c70369d64c900fb46a1399d", "text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. 
We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. By analyzing the data on the device, the user has the control over the data, i.e., privacy, and the network costs will also be removed.", "title": "" }, { "docid": "c5f0155b2f6ce35a9cbfa38773042833", "text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, it is also possible the involvement of the lips, mouth, pharynx and larynx. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, having been requested serology and biopsy of the lesions. Was started the treatment with pentavalent antimony and the patient presented regression of the lesions in 30 days, with no other complications.", "title": "" }, { "docid": "362c41e8f90c097160c7785e8b4c9053", "text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the bio-mimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize on the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market? Author", "title": "" }, { "docid": "98e392ace28d496dafd83ec962ce00af", "text": "Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. 
We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking timebounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis for CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.", "title": "" }, { "docid": "0512987d091d29681eb8ba38a1079cff", "text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.", "title": "" } ]
scidocsrr
9b439b4dd326e5392be3351868cd1645
Swing-up of the double pendulum on a cart by feedforward and feedback control with experimental validation
[ { "docid": "d61ff7159a1559ec2c4be9450c1ad3b6", "text": "This paper presents the control of an underactuated two-link robot called the Pendubot. We propose a controller for swinging the linkage and rise it to its uppermost unstable equilibrium position. The balancing control is based on an energy approach and the passivity properties of the system.", "title": "" } ]
[ { "docid": "caa30379a2d0b8be2e1b4ddf6e6602c2", "text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).", "title": "" }, { "docid": "9244b687b0031e895cea1fcf5a0b11da", "text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.", "title": "" }, { "docid": "15205e074804764a6df0bdb7186c0d8c", "text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.", "title": "" }, { "docid": "11d551da8299c7da76fbeb22b533c7f1", "text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. 
The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.", "title": "" }, { "docid": "5ba3baabc84d02f0039748a4626ace36", "text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.", "title": "" }, { "docid": "ab3dd1f92c09e15ee05ab7f65f676afe", "text": "We introduce a novel learning method for 3D pose estimation from color images. While acquiring annotations for color images is a difficult task, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. We jointly learn the pose from synthetic depth images that are easy to generate, and learn to align these synthetic depth images with the real depth images. We show our approach for the task of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performances comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.", "title": "" }, { "docid": "0c34e8355f1635b3679159abd0a82806", "text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. 
Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "title": "" }, { "docid": "769c1933f833cbe0c79422e3e15a6ff3", "text": "The concept of presortedness and its use in sorting are studied. Natural ways to measure presortedness are given and some general properties necessary for a measure are proposed. A concept of a sorting algorithm optimal with respect to a measure of presortedness is defined, and examples of such algorithms are given. A new insertion sort algorithm is shown to be optimal with respect to three natural measures. The problem of finding an optimal algorithm for an arbitrary measure is studied, and partial results are proven.", "title": "" }, { "docid": "f3a253dcae5127fcd4e62fd2508eef09", "text": "ACC: allergic contact cheilitis Bronopol: 2-Bromo-2-nitropropane-1,3-diol MI: methylisothiazolinone MCI: methylchloroisothiazolinone INTRODUCTION Pediatric cheilitis can be a debilitating condition for the child and parents. Patch testing can help isolate allergens to avoid. Here we describe a 2-yearold boy with allergic contact cheilitis improving remarkably after prudent avoidance of contactants and food avoidance.", "title": "" }, { "docid": "dc693ab2e8991630f62caf0f62eb0dc6", "text": "The paper presents the power amplifier design. The introduction of a practical harmonic balance capability at the device measurement stage brings a number of advantages and challenges. Breaking down this traditional barrier means that the test-bench engineer needs to become more aware of the design process and requirements. The inverse is also true, as the measurement specifications for a harmonically tuned amplifier are a bit more complex than just the measurement of load-pull contours. We hope that the new level of integration between both will also result in better exchanges between both sides and go beyond showing either very accurate, highly tuned device models, or using the device model as the traditional scapegoat for unsuccessful PA designs. A nonlinear model and its quality can now be diagnosed through direct comparison of simulated and measured wave forms. The quality of a PA design can be verified by placing the device within the measurement system, practical harmonic balance emulator into the same impedance state in which it will operate in the actual realized design.", "title": "" }, { "docid": "a161b0fe0b38381a96f02694fd84c3bf", "text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.", "title": "" }, { "docid": "1c16fa259b56e3d64f2468fdf758693a", "text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. 
Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. 
Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. 
Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. 
La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "f4bd8831ff5bf3372b2ab11d7c53a64b", "text": "The demonstration that dopamine loss is the key pathological feature of Parkinson's disease (PD), and the subsequent introduction of levodopa have revolutionalized the field of PD therapeutics. This review will discuss the significant progress that has been made in the development of new pharmacological and surgical tools to treat PD motor symptoms since this major breakthrough in the 1960s. However, we will also highlight some of the challenges the field of PD therapeutics has been struggling with during the past decades. The lack of neuroprotective therapies and the limited treatment strategies for the nonmotor symptoms of the disease (ie, cognitive impairments, autonomic dysfunctions, psychiatric disorders, etc.) are among the most pressing issues to be addressed in the years to come. It appears that the combination of early PD nonmotor symptoms with imaging of the nigrostriatal dopaminergic system offers a promising path toward the identification of PD biomarkers, which, once characterized, will set the stage for efficient use of neuroprotective agents that could slow down and alter the course of the disease.", "title": "" }, { "docid": "f5f1300baf7ed92626c912b98b6308c9", "text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.", "title": "" }, { "docid": "4f58172c8101b67b9cd544b25d09f2e2", "text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. 
For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "91ed0637e0533801be8b03d5ad21d586", "text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.", "title": "" }, { "docid": "9a12ec03e4521a33a7e76c0c538b6b43", "text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. 
Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.", "title": "" }, { "docid": "c72dc472d12c9c822ae240bec5d57c37", "text": "The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.", "title": "" } ]
scidocsrr
945ba57676c8d5d5f087939aa6b5a6b5
Obstacle detection with ultrasonic sensors and signal analysis metrics
[ { "docid": "990c123bcc1bf3bbf2a42990ba724169", "text": "This paper demonstrates an innovative and simple solution for obstacle detection and collision avoidance of unmanned aerial vehicles (UAVs) optimized for and evaluated with quadrotors. The sensors exploited in this paper are low-cost ultrasonic and infrared range finders, which are much cheaper though noisier than more expensive sensors such as laser scanners. This needs to be taken into consideration for the design, implementation, and parametrization of the signal processing and control algorithm for such a system, which is the topic of this paper. For improved data fusion, inertial and optical flow sensors are used as a distance derivative for reference. As a result, a UAV is capable of distance controlled collision avoidance, which is more complex and powerful than comparable simple solutions. At the same time, the solution remains simple with a low computational burden. Thus, memory and time-consuming simultaneous localization and mapping is not required for collision avoidance.", "title": "" } ]
[ { "docid": "963f97c27adbc7d1136e713247e9a852", "text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.", "title": "" }, { "docid": "add026119d82ec730038fcc3521304c5", "text": "Deep Learning has emerged as a new area in machine learning and is applied to a number of signal and image applications.The main purpose of the work presented in this paper, is to apply the concept of a Deep Learning algorithm namely, Convolutional neural networks (CNN) in image classification. The algorithm is tested on various standard datasets, like remote sensing data of aerial images (UC Merced Land Use Dataset) and scene images from SUN database. The performance of the algorithm is evaluated based on the quality metric known as Mean Squared Error (MSE) and classification accuracy. The graphical representation of the experimental results is given on the basis of MSE against the number of training epochs. The experimental result analysis based on the quality metrics and the graphical representation proves that the algorithm (CNN) gives fairly good classification accuracy for all the tested datasets.", "title": "" }, { "docid": "6e675e8a57574daf83ab78cea25688f5", "text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore “unsupervised” approaches to quality prediction that does not require labelled data. An alternate technique is to use “supervised” approaches that learn models from project data labelled with, say, “defective” or “not-defective”. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSE’16, Yang et al. 
reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.", "title": "" }, { "docid": "bffddca72c7e9d6e5a8c760758a98de0", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.", "title": "" }, { "docid": "848f8efe11785c00e8e8af737d173d44", "text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that everyday analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class unbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.", "title": "" }, { "docid": "b3235d925a1f452ee5ed97cac709b9d4", "text": "Xiaoming Zhai is a doctoral student in the Department of Physics, Beijing Normal University, and is a visiting scholar in the College of Education, University of Washington. His research interests include physics assessment and evaluation, as well as technology-supported physics instruction. 
He has been a distinguished high school physics teacher who won numerous nationwide instructional awards. Meilan Zhang is an instructor in the Department of Teacher Education at University of Texas at El Paso. Her research focuses on improving student learning using mobile technology, understanding Internet use and the digital divide using big data from Internet search trends and Web analytics. Min Li is an Associate Professor in the College of Education, University of Washington. Her expertise is science assessment and evaluation, and quantitative methods. Address for correspondence: Xiaoming Zhai, Department of Physics, Beijing Normal University, Room A321, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: xiaomingzh@mail.bnu.edu.cn", "title": "" }, { "docid": "2b23723ab291aeff31781cba640b987b", "text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.", "title": "" }, { "docid": "4bd7a933cf0d54a84c106a1591452565", "text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. 
Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.", "title": "" }, { "docid": "b56a6fe9c9d4b45e9d15054004fac918", "text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.", "title": "" }, { "docid": "b54abd40f41235fa8e8cd4e9f42cd777", "text": "This paper presents a review of thermal energy storage system design methodologies and the factors to be considered at different hierarchical levels for concentrating solar power (CSP) plants. Thermal energy storage forms a key component of a power plant for improvement of its dispatchability. Though there have been many reviews of storage media, there are not many that focus on storage system design along with its integration into the power plant. This paper discusses the thermal energy storage system designs presented in the literature along with thermal and exergy efficiency analyses of various thermal energy storage systems integrated into the power plant. Economic aspects of these systems and the relevant publications in literature are also summarized in this effort. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63da0b3d1bc7d6aedd5356b8cdf67b24", "text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. 
In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.", "title": "" }, { "docid": "1fcd6f0c91522a91fa05b0d969f8eec1", "text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.", "title": "" }, { "docid": "e048d73b37168c7b7ed46915e11b1bf0", "text": "Creating graphic designs can be challenging for novice users. This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements. The system uses two distinct but complementary types of suggestions: refinement suggestions, which improve the current layout, and brainstorming suggestions, which change the style. We investigate two interfaces for interacting with suggestions. First, we develop a suggestive interface, where suggestions are previewed and can be accepted. Second, we develop an adaptive interface where elements move automatically to improve the layout. We compare both interfaces with a baseline without suggestions, and show that for novice designers, both interfaces produce significantly better layouts, as evaluated by other novices.", "title": "" }, { "docid": "01202e09e54a1fc9f5b36d67fbbf3870", "text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. 
The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.", "title": "" }, { "docid": "609997fbec79d71daa7c63e6fbbc6cc4", "text": "Memory encoding occurs rapidly, but the consolidation of memory in the neocortex has long been held to be a more gradual process. We now report, however, that systems consolidation can occur extremely quickly if an associative \"schema\" into which new information is incorporated has previously been created. In experiments using a hippocampal-dependent paired-associate task for rats, the memory of flavor-place associations became persistent over time as a putative neocortical schema gradually developed. New traces, trained for only one trial, then became assimilated and rapidly hippocampal-independent. Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning. The concept of neocortical schemas may unite psychological accounts of knowledge structures with neurobiological theories of systems memory consolidation.", "title": "" }, { "docid": "3e8f290f9d19996feb6551cde8815307", "text": "Simplification of IT services is an imperative of the times we are in. Large legacy behemoths that exist at financial institutions are a result of years of patch work development on legacy landscapes that have developed in silos at various lines of businesses (LOBs). This increases costs -- for running financial services, changing the services as well as providing services to customers. We present here a basic guide to what constitutes complexity of IT landscape at financial institutions, what simplification means, and opportunities for simplification and how it can be carried out. We also explain a 4-phase approach to planning and executing Simplification of IT services at financial institutions.", "title": "" }, { "docid": "526e36dd9e3db50149687ea6358b4451", "text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "f45e43935de492d3598469cd24c48188", "text": "Given a task of predicting Y from X , a loss function L, and a set of probability distributions Γ on (X,Y ), what is the optimal decision rule minimizing the worstcase expected loss over Γ? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models, which connects the minimax problem for each loss function to a generalized linear model. 
While in some cases such as quadratic and logarithmic loss functions we revisit well-known linear and logistic regression models, our approach reveals novel models for other loss functions. In particular, for the 0-1 loss we derive a classification approach which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Γ by solving a tractable optimization problem. Moreover, applying the minimax approach to Brier loss function we derive a new classification model called the minimax Brier. The maximum likelihood problem for this model uses the Huber penalty function. We perform several numerical experiments to show the power of the minimax SVM and the minimax Brier.", "title": "" }, { "docid": "00a3504c21cf0a971a717ce676d76933", "text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.", "title": "" }, { "docid": "625002b73c5e386989ddd243a71a1b56", "text": "AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student's typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student's questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.", "title": "" } ]
scidocsrr
a69534aff3e44a8641428e4ddbe1de14
Tensor decomposition of EEG signals: A brief review
[ { "docid": "ffc36fa0dcc81a7f5ba9751eee9094d7", "text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.", "title": "" } ]
[ { "docid": "e90e2a651c54b8510efe00eb1d8e7be0", "text": "The design simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications is presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave-ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional and the E-plane pattern is also very close to that of an ideal dipole antenna. Also a comparison with the popular printed inverted-F antenna (PIFA) has been conducted, the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when the omnidirectional pattern is desired. Further more, the study of the antenna printed on a simulated PCMCIA card and that inserted inside a laptop PC are also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect of the laptop PC housing with different angle between the display and keyboard on the antenna is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the opposite direction of the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration for the antenna design for WLAN applications. For the proposed antenna, in addition to be used alone for a horizontally polarized antenna, it can be also a part of a diversity antenna", "title": "" }, { "docid": "94aa0777f80aa25ec854f159dc3e0706", "text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.", "title": "" }, { "docid": "a7de62c78f1286e66fd35145f3163f1c", "text": "A particularly insidious type of concurrency bug is atomicity violations. While there has been substantial work on automatic detection of atomicity violations, each existing technique has focused on a certain type of atomic region. To address this limitation, this paper presents Atom Tracker, a comprehensive approach to atomic region inference and violation detection. Atom Tracker is the first scheme to (1) automatically infer generic atomic regions (not limited by issues such as the number of variables accessed, the number of instructions included, or the type of code construct the region is embedded in) and (2) automatically detect violations of them at runtime with negligible execution overhead. 
Atom Tracker provides novel algorithms to infer generic atomic regions and to detect atomicity violations of them. Moreover, we present a hardware implementation of the violation detection algorithm that leverages cache coherence state transitions in a multiprocessor. In our evaluation, we take eight atomicity violation bugs from real-world codes like Apache, MySql, and Mozilla, and show that Atom Tracker detects them all. In addition, Atom Tracker automatically infers all of the atomic regions in a set of micro benchmarks accurately. Finally, we also show that the hardware implementation induces a negligible execution time overhead of 0.2–4.0% and, therefore, enables Atom Tracker to find atomicity violations on-the-fly in production runs.", "title": "" }, { "docid": "4acc30bade98c1257ab0a904f3695f3d", "text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.", "title": "" }, { "docid": "139b3dae4713a5bcff97e1b209bd3206", "text": "Utilizing parametric and nonparametric techniques, we assess the role of a heretofore relatively unexplored ‘input’ in the educational process, homework, on academic achievement. Our results indicate that homework is an important determinant of student test scores. Relative to more standard spending related measures, extra homework has a larger and more significant impact on test scores. However, the effects are not uniform across different subpopulations. Specifically, we find additional homework to be most effective for high and low achievers, which is further confirmed by stochastic dominance analysis. Moreover, the parametric estimates of the educational production function overstate the impact of schooling related inputs. In all estimates, the homework coefficient from the parametric model maps to the upper deciles of the nonparametric coefficient distribution and as a by-product the parametric model understates the percentage of students with negative responses to additional homework. JEL: C14, I21, I28", "title": "" }, { "docid": "d18ed4c40450454d6f517c808da7115a", "text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.", "title": "" }, { "docid": "e2b42351d30b2b1938497c6fdab68135", "text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board of a vehicle, and then identifies the detected road signs. 
This paper presents an automatic neural-network-based road sign recognition system. First, a study of the existing road sign recognition research is presented. In this study, the issues associated with automatic road sign recognition are described, the existing methods developed to tackle the road sign recognition problem are reviewed, and a comparison of the features of these methods is given. Second, the developed road sign recognition system is described. The system is capable of analysing live colour road scene images, detecting multiple road signs within each image, and classifying the type of road signs detected. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space, and then detects road signs using a Multi-layer Perceptron neural-network. The classification module determines the type of detected road signs using a series of one to one architectural Multi-layer Perceptron neural networks. Two sets of classifiers are trained using the Resillient-Backpropagation and Scaled-Conjugate-Gradient algorithms. The two modules of the system are evaluated individually first. Then the system is tested as a whole. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 95.96% using the scaled-conjugate-gradient trained classifiers.", "title": "" }, { "docid": "97b7065942b53f2d873c80f32242cd00", "text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.", "title": "" }, { "docid": "025d4933b4cc199366ffbff7cf51aea6", "text": "An increase in pulsatile release of LHRH is essential for the onset of puberty. However, the mechanism controlling the pubertal increase in LHRH release is still unclear. In primates the LHRH neurosecretory system is already active during the neonatal period but subsequently enters a dormant state in the juvenile/prepubertal period. Neither gonadal steroid hormones nor the absence of facilitatory neuronal inputs to LHRH neurons is responsible for the low levels of LHRH release before the onset of puberty in primates. Recent studies suggest that during the prepubertal period an inhibitory neuronal system suppresses LHRH release and that during the subsequent maturation of the hypothalamus this prepubertal inhibition is removed, allowing the adult pattern of pulsatile LHRH release. 
In fact, γ-aminobutyric acid (GABA) appears to be an inhibitory neurotransmitter responsible for restricting LHRH release before the onset of puberty in female rhesus monkeys. In addition, it appears that the reduction in tonic GABA inhibition allows an increase in the release of glutamate as well as other neurotransmitters, which contributes to the increase in pubertal LHRH release. In this review, developmental changes in several neurotransmitter systems controlling pulsatile LHRH release are extensively reviewed.", "title": "" }, { "docid": "4e5661631557563430a82b4685ef6aa3", "text": "Cloud Computing (CC) is fast becoming well known in the computing world as the latest technology. CC enables users to use resources as and when they are required. Mobile Cloud Computing (MCC) is an integration of the concept of cloud computing within a mobile environment, which removes barriers linked to the mobile devices' performance. Nevertheless, these new benefits are not problem-free entirely. Several common problems encountered by MCC are privacy, personal data management, identity authentication, and potential attacks. The security issues are a major hindrance in the mobile cloud computing's adaptability. This study begins by presenting the background of MCC including the various definitions, infrastructures, and applications. In addition, the current challenges and opportunities will be presented including the different approaches that have been adapted in studying MCC.", "title": "" }, { "docid": "7f2dff96e9c1742842fea6a43d17f93e", "text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a small minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.", "title": "" }, { "docid": "ef7c3f93851f77274f4d2b9557e572d6", "text": "In today’s world most of us depend on Social Media to communicate, express our feelings and share information with our friends. Social Media is the medium where now a day’s people feel free to express their emotions. Social Media collects the data in structured and unstructured, formal and informal data as users do not care about the spellings and accurate grammatical construction of a sentence while communicating with each other using different social networking websites ( Facebook, Twitter, LinkedIn and YouTube). Gathered data contains sentiments and opinion of users which will be processed using data mining techniques and analyzed for achieving the meaningful information from it. Using Social media data we can classify the type of users by analysis of their posted data on the social web sites. Machine learning algorithms are used for text classification which will extract meaningful data from these websites. 
Here, in this paper we will discuss the different types of classifiers and their advantages and disadvantages.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "36209810c1a842c871b639220ba63036", "text": "This paper proposes an extension to the Generative Adversarial Networks (GANs), namely as ArtGAN to synthetically generate more challenging and complex images such as artwork that have abstract characteristics. This is in contrast to most of the current solutions that focused on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the discriminator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable to create realistic artwork, as well as generate compelling real world images that globally look natural with clear shape on CIFAR-10.", "title": "" }, { "docid": "f9879c1592683bc6f3304f3937d5eee2", "text": "Altered cell metabolism is a characteristic feature of many cancers. Aside from well-described changes in nutrient consumption and waste excretion, altered cancer cell metabolism also results in changes to intracellular metabolite concentrations. Increased levels of metabolites that result directly from genetic mutations and cancer-associated modifications in protein expression can promote cancer initiation and progression. Changes in the levels of specific metabolites, such as 2-hydroxyglutarate, fumarate, succinate, aspartate and reactive oxygen species, can result in altered cell signalling, enzyme activity and/or metabolic flux. In this Review, we discuss the mechanisms that lead to changes in metabolite concentrations in cancer cells, the consequences of these changes for the cells and how they might be exploited to improve cancer therapy.", "title": "" }, { "docid": "34c41c33ce2cd7642cf29d8bfcab8a3f", "text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. 
It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.", "title": "" }, { "docid": "78e631aceb9598767289c86ace415e2b", "text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.", "title": "" }, { "docid": "e1a4468ccd5305b5158c26b2160d04a6", "text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.", "title": "" }, { "docid": "425ee0a0dc813a3870af72ac02ea8bbc", "text": "Although the mechanism of action of botulinum toxin (BTX) has been intensively studied, many unanswered questions remain regarding the composition and clinical properties of the two formulations of BTX currently approved for cosmetic use. 
In the first half of this review, these questions are explored in detail, with emphasis on the most pertinent and revelatory studies in the literature. The second half delineates most of the common and some not so common uses of BTX in the face and neck, stressing important patient selection and safety considerations. Complications from neurotoxins at cosmetic doses are generally rare and usually technique dependent.", "title": "" } ]
scidocsrr
59847000e175024b7b600b79e60d9de5
Circumferential Traveling Wave Slot Array on Cylindrical Substrate Integrated Waveguide (CSIW)
[ { "docid": "24151cf5d4481ba03e6ffd1ca29f3441", "text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.", "title": "" }, { "docid": "97a8c2ba66f6fdb917d25729a1874d92", "text": "Transverse slot array antennas fed by a half-mode substrate integrated waveguide (HMSIW) are proposed and developed in this paper. The design concept of these new radiating structures is based on the study of the field distribution and phase constant along the HMSIW as well as on the resonant characteristics of a single slot etched on its top conducting wall. Two types of HMSIW-fed slot array antennas, operating, respectively, in X-band and Ka-band, are designed following a procedure similar to the design of slot array antennas fed by a dielectric-filled rectangular waveguide. Compared with slot array antennas fed by a conventional rectangular waveguide, such proposed HMSIW-fed slot array antennas possess the advantages of low profile, compact size, low cost, and easy integration with other microwave and millimeter wave planar circuits. It is worth noting that the width of HMSIW slot array antennas is reduced by nearly half compared to that of slot array antennas fed by a substrate integrated waveguide.", "title": "" }, { "docid": "29c6cba747a2ad280d2185bfcd5866e2", "text": "A millimeter-wave shaped-beam substrate integrated conformal array antenna is demonstrated in this paper. After discussing the influence of conformal shape on the characteristics of a substrate integrated waveguide (SIW) and a radiating slot, an array mounted on a cylindrical surface with a radius of 20 mm, i.e., 2.3 λ, is synthesized at the center frequency of 35 GHz. All components, including a 1-to-8 divider, a phase compensated network and an 8 × 8 slot array are fabricated in a single dielectric substrate together. In measurement, it has a - 27.4 dB sidelobe level (SLL) beam in H-plane and a flat-topped fan beam with -38° ~ 37° 3 dB beamwidth in E-plane at the center frequency of 35 GHz. The cross polarization is lower than -41.7 dB at the beam direction. Experimental results agree well with simulations, thus validating our design. This SIW scheme is able to solve the difficulty of integration between conformal array elements and a feed network in millimeter-wave frequency band, while avoid radiation leakage and element-to-element parasitic cross-coupling from the feed network.", "title": "" }, { "docid": "9b0c0001e3bf9d3618928bbfcad07ae9", "text": "A Ka-band compact single layer substrate integrated waveguide monopulse slot array antenna for the application of monopulse tracking system is designed, fabricated and measured. The feeding network as well as the monopulse comparator and the subarrays is integrated on the same dielectric with the size of 140 mmtimes130 mm. 
The bandwidth (S11 < -10 dB) of the antenna is 7.39% with an operating frequency range of 30.80 GHz-33.14 GHz. The maximum gain at 31.5 GHz is 18.74 dB and the maximum null depth is -46.3 dB. The sum- and difference patterns of three planes: H-plane, E-plane and diagonal plane are measured and presented.", "title": "" }, { "docid": "a7ca3ffcae09ad267281eb494532dc54", "text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.", "title": "" } ]
[ { "docid": "c6d3f20e9d535faab83fb34cec0fdb5b", "text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1", "title": "" }, { "docid": "d001d61e90dd38eb0eab0c8d4af9d2a6", "text": "Wireless LANs, especially WiFi, have been pervasively deployed and have fostered myriad wireless communication services and ubiquitous computing applications. A primary concern in designing each scenario-tailored application is to combat harsh indoor propagation environments, particularly Non-Line-Of-Sight (NLOS) propagation. The ability to distinguish Line-Of-Sight (LOS) path from NLOS paths acts as a key enabler for adaptive communication, cognitive radios, robust localization, etc. Enabling such capability on commodity WiFi infrastructure, however, is prohibitive due to the coarse multipath resolution with mere MAC layer RSSI. In this work, we dive into the PHY layer and strive to eliminate irrelevant noise and NLOS paths with long delays from the multipath channel responses. To further break away from the intrinsic bandwidth limit of WiFi, we extend to the spatial domain and harness natural mobility to magnify the randomness of NLOS paths while retaining the deterministic nature of the LOS component. We prototype LiFi, a statistical LOS identification scheme for commodity WiFi infrastructure and evaluate it in typical indoor environments covering an area of 1500 m2. 
Experimental results demonstrate an overall LOS identification rate of 90.4% with a false alarm rate of 9.3%.", "title": "" }, { "docid": "8fa0c59e04193ff1375b3ed544847229", "text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.", "title": "" }, { "docid": "c4fe9fd7e506e18f1a38bc71b7434b99", "text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.", "title": "" }, { "docid": "4f1949af3455bd5741e731a9a60ecdf1", "text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. 
Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.", "title": "" }, { "docid": "3e2c79715d8ae80e952d1aabf03db540", "text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].", "title": "" }, { "docid": "fc3d4b4ac0d13b34aeadf5806013689d", "text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.", "title": "" }, { "docid": "468306f51c998bfe6792df6acfd784f2", "text": "We propose a novel non-rigid image registration algorithm that is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. At the same time, our method also learns FCNs for encoding the spatial transformations at the same spatial resolution of images to be registered, rather than learning coarse-grained spatial transformation information. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different resolutions with deep selfsupervision through typical feedforward and backpropagation computation. Since our method simultaneously optimizes and learns spatial transformations for the image registration, our method can be directly used to register a pair of images, and the registration of a set of images is also a training procedure for FCNs so that the trained FCNs can be directly adopted to register new images by feedforward computation of the learned FCNs without any optimization. 
The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.", "title": "" }, { "docid": "7121d534b758bab829e1db31d0ce2e43", "text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.", "title": "" }, { "docid": "7ca7bca5a704681e8b8c7d213c6ad990", "text": "Three experiments in naming Chinese characters are presented here to address the relationships between character frequency, consistency, and regularity effects in Chinese character naming. Significant interactions between character consistency and frequency were found across the three experiments, regardless of whether the phonetic radical of the phonogram is a legitimate character in its own right or not. These findings suggest that the phonological information embedded in Chinese characters has an influence upon the naming process of Chinese characters. Furthermore, phonetic radicals exist as computation units mainly because they are structures occurring systematically within Chinese characters, not because they can function as recognized, freestanding characters. On the other hand, the significant interaction between regularity and consistency found in the first experiment suggests that these two factors affect Chinese character naming in different ways. These findings are accounted for within interactive activation frameworks and a connectionist model.", "title": "" }, { "docid": "4b6da0b9c88f4d94abfbbcb08bb0fc43", "text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. 
Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.", "title": "" }, { "docid": "6f989e22917aa2f99749701c8509fcca", "text": "The reflection of an object can be distorted by undulations of the reflector, be it a funhouse mirror or a fluid surface. Painters and photographers have long exploited this effect, for example, in imaging scenery distorted by ripples on a lake. Here, we use this phenomenon to visualize micrometric surface waves generated as a millimetric droplet bounces on the surface of a vibrating fluid bath (Bush 2015b). This system, discovered a decade ago (Couder et al. 2005), is of current interest as a hydrodynamic quantum analog; specifically, the walking droplets exhibit several features reminiscent of quantum particles (Bush 2015a).", "title": "" }, { "docid": "4ac88aa31bff5b4942dd062d42879d27", "text": "In this paper we demonstrate the potential of data analytics methods for location-based services. We develop a support system that enables user-based relocation of vehicles in free-floating carsharing models. In these businesses, customers can rent and leave cars anywhere within a predefined operational area. However, due to this flexibility, freefloating carsharing is prone to supply and demand imbalance. The support system detects imbalances by analyzing patterns in vehicle idle times. Alternative rental destinations are proposed to customers in exchange for a discount. Using data on 250,000 rentals in the city of Vancouver, we evaluate the relocation system through a simulation. The results show that our approach decreases the average vehicle idle time by up to 16 percent, suggesting a more balanced state of supply and demand. Employing the system results in a higher degree of vehicle utilization and leads to a substantial increase of profits for providers.", "title": "" }, { "docid": "9544b2cc301e2e3f170f050de659dda4", "text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. 
We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.", "title": "" }, { "docid": "f1c210ee9f70db482d134bf544984f77", "text": "Character segmentation plays an important role in the Arabic optical character recognition (OCR) system, because the letters incorrectly segmented perform to unrecognized character. Accuracy of character recognition depends mainly on the segmentation algorithm used. The domain of off-line handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different segmentation algorithms for off-line Arabic handwriting recognition have been proposed and applied to various types of word images. This paper provides modify segmentation algorithm based on bounding box to improve segmentation accuracy using two main stages: preprocessing stage and segmentation stage. In preprocessing stage, used a set of methods such as noise removal, binarization, skew correction, thinning and slant correction, which retains shape of the character. In segmentation stage, the modify bounding box algorithm is done. In this algorithm a distance analysis use on bounding boxes of two connected components (CCs): main (CCs), auxiliary (CCs). The modified algorithm is presented and taking place according to three cases. Cut points also determined using structural features for segmentation character. The modified bounding box algorithm has been successfully tested on 450 word images of Arabic handwritten words. The results were very promising, indicating the efficiency of the suggested", "title": "" }, { "docid": "42ca37dd78bf8b52da5739ad442f203f", "text": "Frame interpolation attempts to synthesise intermediate frames given one or more consecutive video frames. In recent years, deep learning approaches, and in particular convolutional neural networks, have succeeded at tackling lowand high-level computer vision problems including frame interpolation. There are two main pursuits in this line of research, namely algorithm efficiency and reconstruction quality. In this paper, we present a multi-scale generative adversarial network for frame interpolation (FIGAN). To maximise the efficiency of our network, we propose a novel multi-scale residual estimation module where the predicted flow and synthesised frame are constructed in a coarse-tofine fashion. To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses. We evaluate the proposed approach using a collection of 60fps videos from YouTube-8m. Our results improve the state-of-the-art accuracy and efficiency, and a subjective visual quality comparable to the best performing interpolation method.", "title": "" }, { "docid": "2f83ca2bdd8401334877ae4406a4491c", "text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. 
These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.", "title": "" }, { "docid": "0edc89fbf770bbab2fb4d882a589c161", "text": "A calculus is developed in this paper (Part I) and the sequel (Part II) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered into the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.", "title": "" }, { "docid": "548e1962ac4a2ea36bf90db116c4ff49", "text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.", "title": "" }, { "docid": "f391c56dd581d965548062944200e95f", "text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.", "title": "" } ]
scidocsrr
9dcef20242cd852b9f363fd031d641ec
Interactive Instance-based Evaluation of Knowledge Base Question Answering
[ { "docid": "1fd9db81e41fc3b9a76a52cc9a0618c1", "text": "Semantic parsing is a rich fusion of the logical and the statistical worlds.", "title": "" }, { "docid": "9b288ed3a6079bee5ed3154b1aab296e", "text": "We introduce ParlAI (pronounced “parlay”), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others’ models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.", "title": "" } ]
[ { "docid": "b7ae9cae900253f270d43c4b34e68c57", "text": "In this paper, a complete voiceprint recognition based on Matlab was realized, including speech processing and feature extraction at early stage, and model training and recognition at later stage. For speech processing and feature extraction at early stage, Mel Frequency Cepstrum Coefficient (MFCC) was taken as feature parameter. For speaker model method, DTW model was adopted to reflect the voiceprint characteristics of speech, converting voiceprint recognition into speaker speech data evaluation, and breaking up complex speech training and matching into model parameter training and probability calculation. Simulation experiment results show that this system is effective to recognize voiceprint.", "title": "" }, { "docid": "66610cf27a67760f6625e2fe4bbc7783", "text": "UNLABELLED\nYale Image Finder (YIF) is a publicly accessible search engine featuring a new way of retrieving biomedical images and associated papers based on the text carried inside the images. Image queries can also be issued against the image caption, as well as words in the associated paper abstract and title. A typical search scenario using YIF is as follows: a user provides few search keywords and the most relevant images are returned and presented in the form of thumbnails. Users can click on the image of interest to retrieve the high resolution image. In addition, the search engine will provide two types of related images: those that appear in the same paper, and those from other papers with similar image content. Retrieved images link back to their source papers, allowing users to find related papers starting with an image of interest. Currently, YIF has indexed over 140 000 images from over 34 000 open access biomedical journal papers.\n\n\nAVAILABILITY\nhttp://krauthammerlab.med.yale.edu/imagefinder/", "title": "" }, { "docid": "40fcf74d2f15757ac3c9b401c05a4fb9", "text": "Phones with some of the capabilities of modern computers also have the same kind of drawbacks. These phones are commonly referred to as smartphones. They have both phone and personal digital assistant (PDA) functionality. Typical to these devices is to have a wide selection of different connectivity options from general packet radio service (GPRS) data transfer to multi media messages (MMS) and wireless local area network (WLAN) capabilities. They also have standardized operating systems, which makes smartphones a viable platform for malware writers. Since the design of the operating systems is recent, many common security holes and vulnerabilities have been taken into account during the design. However, these precautions have not fully protected these devices. Even now, when smartphones are not that common, there is a handful of viruses for them. In this paper we will discuss some of the most typical viruses in the mobile environment and propose guidelines and predictions for the future.", "title": "" }, { "docid": "a791f5339b1a49567581cd64a1c678c8", "text": "Making data to be more connected is one of the goals of Semantic Technology. Therefore, relational data model as one of important data resource type, is needed to be mapped and converted to graph model. In this paper we focus in mapping and converting without semantically loss, by considering semantic abstraction of the real world, which has been ignored in some previous researches. As a graph schema model, it can be implemented in graph database or linked data in RDF/OWL format. 
This approach argues that relationships deserve more attention in mapping and converting, because a gap in semantic abstraction is often found during those processes. Our small experiment shows that our idea can map and convert a relational model to a graph model without semantic loss.", "title": "" }, { "docid": "f0958d2c952c7140c998fa13a2bf4374", "text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.", "title": "" }, { "docid": "d1515b3c475989e3c3584e02c0d5c329", "text": "Sexting has received increasing scholarly and media attention. Especially, minors’ engagement in this behaviour is a source of concern. As adolescents are highly sensitive about their image among peers and prone to peer influence, the present study implemented the prototype willingness model in order to assess how perceptions of peers engaging in sexting possibly influence adolescents’ willingness to send sexting messages. A survey was conducted among 217 15- to 19-year-olds. A total of 18% of respondents had engaged in sexting in the 2 months preceding the study. Analyses further revealed that the subjective norm was the strongest predictor of sexting intention, followed by behavioural willingness and attitude towards sexting. Additionally, the more favourable young people evaluated the prototype of a person engaging in sexting and the higher they assessed their similarity with this prototype, the more they were willing to send sexting messages. Differences were also found based on gender, relationship status and need for popularity. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "977f7723cde3baa1d98ca99cd9ed8881", "text": "Identity Crime is well known, established, and costly. 
Identity Crime is the term used to refer to all types of crime in which someone wrongfully obtains and uses another person’s personal data in some way that involves fraud or deception, typically for economic gain. Forgery and use of fraudulent identity documents are major enablers of Identity Fraud. It has affected the e-commerce. It is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of lots of money worldwide each year. Also along with transaction the application domain such as credit application is hit by this crime. These are growing concerns for not only governmental bodies but business organizations also all over the world. This paper gives a brief summary of the identity fraud. Also it discusses various data mining techniques used to overcome it.", "title": "" }, { "docid": "329420b8b13e8c315d341e382419315a", "text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.", "title": "" }, { "docid": "39a59eac80c6f4621971399dde2fbb7f", "text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. 
We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.", "title": "" }, { "docid": "d717a5955faf08583b946385cf9f41d3", "text": "Spasticity is a prevalent and potentially disabling symptom common in individuals with multiple sclerosis. Adequate evaluation and management of spasticity requires a careful assessment of the patient's history to determine functional impact of spasticity and potential exacerbating factors, and physical examination to determine the extent of the condition and culpable muscles. A host of options for spasticity management are available: therapeutic exercise, physical modalities, complementary/alternative medicine interventions, oral medications, chemodenervation, and implantation of an intrathecal baclofen pump. Choice of treatment hinges on a combination of the extent of symptoms, patient preference, and availability of services.", "title": "" }, { "docid": "5b56288bb7b49f18148f28798cfd8129", "text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.", "title": "" }, { "docid": "d60f7144d7321567136aabdf8cc1ea04", "text": "The higher variability introduced by distributed generation leads to fast changes in the aggregate load composition, and thus in the power response during voltage variations. The smart transformer, a power electronics-based distribution transformer with advanced control functionalities, can exploit the load dependence on voltage for providing services to the distribution and transmission grids. In this paper, two possible applications are proposed: 1) the smart transformer overload control by means of voltage control action and 2) the soft load reduction method, that reduces load consumption avoiding the load disconnection. These services depend on the correct identification of load dependence on voltage, which the smart transformer evaluates in real time based on load measurements. 
The effect of the distributed generation on net load sensitivity has been derived and demonstrated with the control hardware in loop evaluation by means of a real time digital simulator.", "title": "" }, { "docid": "85bc241c03d417099aa155766e6a1421", "text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.", "title": "" }, { "docid": "001d2da1fbdaf2c49311f6e68b245076", "text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. 
To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "5bf9aeb37fc1a82420b2ff4136f547d0", "text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.", "title": "" }, { "docid": "93af342862b02d12463fc452834b6717", "text": "The posterior cerebral artery (PCA) has been noted in literature to have anatomical variations, specifically fenestration. Cerebral arteries with fenestrations are uncommon, especially when associated with other vascular pathologies. 
We report a case here of fenestrations within the P1 segment of the right PCA associated with a right middle cerebral artery (MCA) aneurysm in an elder adult male who presented with a new onset of headaches. The patient was treated with vascular clipping of the MCA and has recovered well. Identifying anatomical variations with appropriate imaging is of particular importance in neuro-interventional procedures as it may have an impact on the procedure itself and consequently post-interventional outcomes. Categories: Neurology, Neurosurgery", "title": "" }, { "docid": "3361e6c7a448e69a73e8b3e879815386", "text": "The neck is not only the first anatomical area to show aging but also contributes to the persona of the individual. The understanding the aging process of the neck is essential for neck rejuvenation. Multiple neck rejuvenation techniques have been reported in the literature. In 1974, Skoog [1] described the anatomy of the superficial musculoaponeurotic system (SMAS) and its role in the aging of the neck. Recently, many patients have expressed interest in minimally invasive surgery with a low risk of complications and short recovery period. The use of thread for neck rejuvenation and the concept of the suture suspension neck lift have become widespread as a convenient and effective procedure; nevertheless, complications have also been reported such as recurrence, inadequate correction, and palpability of the sutures. In this study, we analyzed a new type of thread lift: elastic lift that uses elastic thread (Elasticum; Korpo SRL, Genova, Italy). We already use this new technique for the midface lift and can confirm its efficacy and safety in that context. The purpose of this study was to evaluate the outcomes and safety of the elastic lift technique for neck region lifting.", "title": "" }, { "docid": "33ad7f5618d356b5d28b887f30e3ba84", "text": "BACKGROUND\nHaving cancer may result in extensive emotional, physical and social suffering. Music interventions have been used to alleviate symptoms and treatment side effects in cancer patients.\n\n\nOBJECTIVES\nTo compare the effects of music therapy or music medicine interventions and standard care with standard care alone, or standard care and other interventions in patients with cancer.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, Issue 10), MEDLINE, EMBASE, CINAHL, PsycINFO, LILACS, Science Citation Index, CancerLit, www.musictherapyworld.net, CAIRSS, Proquest Digital Dissertations, ClinicalTrials.gov, Current Controlled Trials, and the National Research Register. All databases were searched from their start date to September 2010. We handsearched music therapy journals and reference lists and contacted experts. There was no language restriction.\n\n\nSELECTION CRITERIA\nWe included all randomized controlled trials (RCTs) and quasi-randomized trials of music interventions for improving psychological and physical outcomes in patients with cancer. Participants undergoing biopsy and aspiration for diagnostic purposes were excluded.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted the data and assessed the risk of bias. Where possible, results were presented in meta analyses using mean differences and standardized mean differences. Post-test scores were used. In cases of significant baseline difference, we used change scores.\n\n\nMAIN RESULTS\nWe included 30 trials with a total of 1891 participants. 
We included music therapy interventions, offered by trained music therapists, as well as listening to pre-recorded music, offered by medical staff. The results suggest that music interventions may have a beneficial effect on anxiety in people with cancer, with a reported average anxiety reduction of 11.20 units (95% confidence interval (CI) -19.59 to -2.82, P = 0.009) on the STAI-S scale and -0.61 standardized units (95% CI -0.97 to -0.26, P = 0.0007) on other anxiety scales. Results also suggested a positive impact on mood (standardised mean difference (SMD) = 0.42, 95% CI 0.03 to 0.81, P = 0.03), but no support was found for depression.Music interventions may lead to small reductions in heart rate, respiratory rate, and blood pressure. A moderate pain-reducing effect was found (SMD = -0.59, 95% CI -0.92 to -0.27, P = 0.0003), but no strong evidence was found for enhancement of fatigue or physical status. The pooled estimate of two trials suggested a beneficial effect of music therapy on patients' quality of life (QoL) (SMD = 1.02, 95% CI 0.58 to 1.47, P = 0.00001).No conclusions could be drawn regarding the effect of music interventions on distress, body image, oxygen saturation level, immunologic functioning, spirituality, and communication outcomes.Seventeen trials used listening to pre-recorded music and 13 trials used music therapy interventions that actively engaged the patients. Not all studies included the same outcomes and due to the small number of studies per outcome, we could not compare the effectiveness of music medicine interventions with that of music therapy interventions.\n\n\nAUTHORS' CONCLUSIONS\nThis systematic review indicates that music interventions may have beneficial effects on anxiety, pain, mood, and QoL in people with cancer. Furthermore, music may have a small effect on heart rate, respiratory rate, and blood pressure. Most trials were at high risk of bias and, therefore, these results need to be interpreted with caution.", "title": "" } ]
scidocsrr
ef31e3bb3c357c2731f139175f9f9126
An active compliance controller for quadruped trotting
[ { "docid": "a258c6b5abf18cb3880e4bc7a436c887", "text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.", "title": "" }, { "docid": "1495ed50a24703566b2bda35d7ec4931", "text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the selfstabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical sysPortions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687 DOI: 10.1177/0278364906066768 ©2006 SAGE Publications Figures appear in color online: http://ijr.sagepub.com tem, and might explain the success of simple, open loop bounding controllers on our experimental robot. KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot", "title": "" }, { "docid": "956ffd90cc922e77632b8f9f79f42a98", "text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433", "title": "" } ]
[ { "docid": "3bc9e621a0cfa7b8791ae3fb94eff738", "text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.", "title": "" }, { "docid": "78c89f8aec24989737575c10b6bbad90", "text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.", "title": "" }, { "docid": "7b44c4ec18d01f46fdd513780ba97963", "text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. 
For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.", "title": "" }, { "docid": "7e422bc9e691d552543c245e7c154cbf", "text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.", "title": "" }, { "docid": "f6099a1e6641d0a93c764efef120dd53", "text": "For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the recent surge in adoption of embedded devices and the IoT revolution are rapidly changing the malware landscape. Embedded devices are profoundly different than traditional personal computers. In fact, while personal computers run predominantly on x86-flavored architectures, embedded systems rely on a variety of different architectures. In turn, this aspect causes a large number of these systems to run some variants of the Linux operating system, pushing malicious actors to give birth to \"\"Linux malware.\"\" To the best of our knowledge, there is currently no comprehensive study attempting to characterize, analyze, and understand Linux malware. The majority of resources on the topic are available as sparse reports often published as blog posts, while the few systematic studies focused on the analysis of specific families of malware (e.g., the Mirai botnet) mainly by looking at their network-level behavior, thus leaving the main challenges of analyzing Linux malware unaddressed. This work constitutes the first step towards filling this gap. After a systematic exploration of the challenges involved in the process, we present the design and implementation details of the first malware analysis pipeline specifically tailored for Linux malware. 
We then present the results of the first large-scale measurement study conducted on 10,548 malware samples (collected over a time frame of one year) documenting detailed statistics and insights that can help directing future work in the area.", "title": "" }, { "docid": "abc48ae19e2ea1e1bb296ff0ccd492a2", "text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also", "title": "" }, { "docid": "62cf2ae97e48e6b57139f305d616ec1b", "text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. 
Through a", "title": "" }, { "docid": "bfe76736623dfc3271be4856f5dc2eef", "text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.", "title": "" }, { "docid": "03c74ae78bfe862499c4cb1e18a58ae7", "text": "Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) not did improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death.", "title": "" }, { "docid": "29ce9730d55b55b84e195983a8506e5c", "text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. 
To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.", "title": "" }, { "docid": "e244cbd076ea62b4d720378c2adf4438", "text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.", "title": "" }, { "docid": "8baddf0d82411d18a77be03759101c82", "text": "Deep convolutional neural networks (DCNNs) have been successfully used in many computer vision tasks. Previous works on DCNN acceleration usually use a fixed computation pattern for diverse DCNN models, leading to imbalance between power efficiency and performance. We solve this problem by designing a DCNN acceleration architecture called deep neural architecture (DNA), with reconfigurable computation patterns for different models. The computation pattern comprises a data reuse pattern and a convolution mapping method. For massive and different layer sizes, DNA reconfigures its data paths to support a hybrid data reuse pattern, which reduces total energy consumption by 5.9~8.4 times over conventional methods. For various convolution parameters, DNA reconfigures its computing resources to support a highly scalable convolution mapping method, which obtains 93% computing resource utilization on modern DCNNs. Finally, a layer-based scheduling framework is proposed to balance DNA’s power efficiency and performance for different DCNNs. DNA is implemented in the area of 16 mm2 at 65 nm. On the benchmarks, it achieves 194.4 GOPS at 200 MHz and consumes only 479 mW. The system-level power efficiency is 152.9 GOPS/W (considering DRAM access power), which outperforms the state-of-the-art designs by one to two orders.", "title": "" }, { "docid": "4def0dc478dfb5ddb5a0ec59ec7433f5", "text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. 
These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.", "title": "" }, { "docid": "29f8b647d8f8de484f2b8f164b9e5add", "text": "is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of vi-rial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x –1/2. Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! Multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endian and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License from", "title": "" }, { "docid": "528796e22fc248de78a91cc089467c04", "text": "Automatic recognition of emotional states from human speech is a current research topic with a wide range. In this paper an attempt has been made to recognize and classify the speech emotion from three language databases, namely, Berlin, Japan and Thai emotion databases. Speech features consisting of Fundamental Frequency (F0), Energy, Zero Crossing Rate (ZCR), Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficient (MFCC) from short-time wavelet signals are comprehensively investigated. In this regard, Support Vector Machines (SVM) is utilized as the classification model. Empirical experimentation shows that the combined features of F0, Energy and MFCC provide the highest accuracy on all databases provided using the linear kernel. It gives 89.80%, 93.57% and 98.00% classification accuracy for Berlin, Japan and Thai emotions databases, respectively.", "title": "" }, { "docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7", "text": "Prediction or prognostication is at the core of modern evidence-based medicine. 
Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.", "title": "" }, { "docid": "5ee410ddc75170aa38c39281a8d86827", "text": "Research in automotive safety leads to the conclusion that modern vehicle should utilize active and passive sensors for the recognition of the environment surrounding them. Thus, the development of tracking systems utilizing efficient state estimators is very important. In this case, problems such as moving platform carrying the sensor and maneuvering targets could introduce large errors in the state estimation and in some cases can lead to the divergence of the filter. In order to avoid sub-optimal performance, the unscented Kalman filter is chosen, while a new curvilinear model is applied which takes into account both the turn rate of the detected object and its tangential acceleration, leading to a more accurate modeling of its movement. 
The performance of the unscented filter using the proposed model in the case of automotive applications is proven to be superior compared to the performance of the extended and linear Kalman filter.", "title": "" }, { "docid": "f47fcbd6412384b85ef458fd3e6b27f3", "text": "In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation- maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimations of residual frequency- offset (FO), fading-channel taps and time-of- arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at a low-sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages to detect ToA, based on which OTDOA can be calculated. In a first stage, after running the EM-SIC block a predefined number of iterations, a coarse ToA is estimated for each of the detected cells. Then in a second stage, to improve the ToA resolution, a low-pass filter is utilized to interpolate the correlations of time-domain PRS signal evaluated at a low sampling-rate to a high sampling-rate such as 30.72 MHz. To keep low-complexity, only the correlations inside a small search window centered at the coarse ToA estimates are upsampled. Then, the refined ToAs are estimated based on upsampled correlations. If at least three cells are detected, with OTDOA and the locations of detected cell sites, the position of the NB-IoT device can be estimated. We show through numerical simulations that, the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, fading-channel and residual FO. Thus significant signal-to-noise (SNR) gains are obtained over traditional ToA detectors that do not consider these impairments when positioning a device.", "title": "" }, { "docid": "36d7f776d7297f67a136825e9628effc", "text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.", "title": "" } ]
scidocsrr
1bb113abb6663a85e1fe4ff40f104804
Single Switched Capacitor Battery Balancing System Enhancements
[ { "docid": "b6bbd83da68fbf1d964503fb611a2be5", "text": "Battery systems are affected by many factors, the most important one is the cells unbalancing. Without the balancing system, the individual cell voltages will differ over time, battery pack capacity will decrease quickly. That will result in the fail of the total battery system. Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.", "title": "" }, { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. 
Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" }, { "docid": "b05df5ff16750040a499f3c62fed2e3f", "text": "The automobile industry is progressing toward hybrid, plug-in hybrid, and fully electric vehicles in their future car models. The energy storage unit is one of the most important blocks in the power train of future electric-drive vehicles. Batteries and/or ultracapacitors are the most prominent storage systems utilized so far. Hence, their reliability during the lifetime of the vehicle is of great importance. 
Charge equalization of series-connected batteries or ultracapacitors is essential due to the capacity imbalances stemming from manufacturing, the ensuing driving environment, and operational usage. A double-tiered capacitive charge-shuttling technique is introduced and applied to a battery system in order to balance the battery-cell voltages. Parameters in the system are varied, and their effects on the performance of the system are determined. Results are compared to a single-tiered approach. MATLAB simulation shows a substantial improvement in charge transport using the new topology. Experimental results verifying the simulation are presented.", "title": "" } ]
[ { "docid": "be4defd26cf7c7a29a85da2e15132be9", "text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.", "title": "" }, { "docid": "e947cf1b4670c10f2453b9012078c3b5", "text": "BACKGROUND\nDyadic suicide pacts are cases in which two individuals (and very rarely more) agree to die together. These account for fewer than 1% of all completed suicides.\n\n\nOBJECTIVE\nThe authors describe two men in a long-term domestic partnership who entered into a suicide pact and, despite utilizing a high-lethality method (simultaneous arm amputation with a power saw), survived.\n\n\nMETHOD\nThe authors investigated the psychiatric, psychological, and social causes of suicide pacts by delving into the history of these two participants, who displayed a very high degree of suicidal intent. Psychiatric interviews and a family conference call, along with the strong support of one patient's family, were elicited.\n\n\nRESULTS\nThe patients, both HIV-positive, showed high levels of depression and hopelessness, as well as social isolation and financial hardship. With the support of his family, one patient was discharged to their care, while the other partner was hospitalized pending reunion with his partner.\n\n\nDISCUSSION\nThis case illustrates many of the key, defining features of suicide pacts that are carried out and also highlights the nature of the dependency relationship.", "title": "" }, { "docid": "4073da56cc874ea71f5e8f9c1c376cf8", "text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. 
Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.", "title": "" }, { "docid": "4ddbdf0217d13c8b349137f1e59910d6", "text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.", "title": "" }, { "docid": "94bd0b242079d2b82c141e9f117154f7", "text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. 
This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.", "title": "" }, { "docid": "7364ae253ce5ace1df277f1d7f620861", "text": "Recent advances in signal processing and the revolution by the mobile technologies have spurred several innovations in all the areas and albeit more so in home based tele-medicine. We used variational mode decomposition (VMD) based denoising on large-scale phonocardiogram (PCG) data sets and achieved better accuracy. We have also implemented a reliable, external hardware and mobile based phonocardiography system that uses VMD signal processing technique to denoise the PCG signal that visually displays the waveform and inform the end-user and send the data to cloud based analytics system.", "title": "" }, { "docid": "f7424faa6dd97ebe93d1acfd5f0c9da9", "text": "This work examines the implications of uncoupled intersections with local realworld topology and sensor setup on traffic light control approaches. Control approaches are evaluated with respect to: Traffic flow, fuel consumption and noise emission at intersections. The real-world road network of Friedrichshafen is depicted, preprocessed and the present traffic light controlled intersections are modeled with respect to state space and action space. Different strategies, containing fixed-time, gap-based and time-based control approaches as well as our deep reinforcement learning based control approach, are implemented and assessed. Our novel DRL approach allows for modeling the TLC action space, with respect to phase selection as well as selection of transition timings. It was found that real-world topologies, and thus irregularly arranged intersections have an influence on the performance of traffic light control approaches. This is even to be observed within the same intersection types (n-arm, m-phases). Moreover we could show, that these influences can be efficiently dealt with by our deep reinforcement learning based control approach.", "title": "" }, { "docid": "b70a70896a3d904c25adb126b584a858", "text": "A case of a fatal cardiac episode resulting from an unusual autoerotic practice involving the use of a vacuum cleaner, is presented. Scene investigation and autopsy findings are discussed.", "title": "" }, { "docid": "4b878ffe2fd7b1f87e2f06321e5f03fa", "text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. 
To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.", "title": "" }, { "docid": "aa5d8162801abcc81ac542f7f2a423e5", "text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).", "title": "" }, { "docid": "5d1e77b6b09ebac609f2e518b316bd49", "text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. 
Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.", "title": "" }, { "docid": "c9c03474e9add95ebb0b89cacdb6c712", "text": "We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.", "title": "" }, { "docid": "59c16bb2ec81dfb0e27ff47ccae0a169", "text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.", "title": "" }, { "docid": "0e893315d6e9257f5a1e6e85291c89ef", "text": "In unsupervised semantic role labeling, identifying the role of an argument is usually informed by its dependency relation with the predicate. In this work, we propose a neural model to learn argument embeddings from the context by explicitly incorporating dependency relations as multiplicative factors, which bias argument embeddings according to their dependency roles. Our model outperforms existing state-of-the-art embeddings in unsupervised semantic role induction on the CoNLL 2008 dataset and the SimLex999 word similarity task. 
Qualitative results demonstrate our model can effectively bias argument embeddings based on their dependency role.", "title": "" }, { "docid": "95ca78f61a46f6e34edce6210d5e0939", "text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.", "title": "" }, { "docid": "c3e8960170cb72f711263e7503a56684", "text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. 
The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.", "title": "" }, { "docid": "7251ff8a3ff1adbf13ddd62ab9a9c9c3", "text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics", "title": "" }, { "docid": "3f47acf3bd67849be29670a3236294c7", "text": "The aims of this study were as follows: (a) to examine the possible presence of an identifiable group of stable victims of cyberbullying; (b) to analyze whether the stability of cybervictimization is associated with the perpetration of cyberbullying and bully–victim status (i.e., being only a bully, only a victim, or being both a bully and a victim); and (c) to test whether stable victims report a greater number of psychosocial problems compared to non-stable victims and uninvolved peers. A sample of 680 Spanish adolescents (410 girls) completed self-report measures on cyberbullying perpetration and victimization, depressive symptoms, and problematic alcohol use at two time points that were separated by one year. The results of cluster analyses suggested the existence of four distinct victimization profiles: ‘‘Stable-Victims,’’ who reported victimization at both Time 1 and Time 2 (5.8% of the sample), ‘‘Time 1-Victims,’’ and ‘‘Time 2-Victims,’’ who presented victimization only at one time (14.5% and 17.6%, respectively), and ‘‘Non-Victims,’’ who presented minimal victimization at both times (61.9% of the sample). Stable victims were more likely to fall into the ‘‘bully–victim’’ category and presented more cyberbullying perpetration than the rest of the groups. Overall, the Stable Victims group displayed higher scores of depressive symptoms and problematic alcohol use over time than the other groups, whereas the Non-Victims displayed the lowest of these scores. These findings have major implications for prevention and intervention efforts aimed at reducing cyberbullying and its consequences. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "038064c2998a5da8664be1ba493a0326", "text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. 
Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.", "title": "" }, { "docid": "ca9f48691e93b6282df2277f4cf8885e", "text": "This paper presents a novel technique, anatomy, for publishing sensitive data. Anatomy releases all the quasi-identifier and sensitive values directly in two separate tables. Combined with a grouping mechanism, this approach protects privacy, and captures a large amount of correlation in the microdata. We develop a linear-time algorithm for computing anatomized tables that obey the l-diversity privacy requirement, and minimize the error of reconstructing the microdata. Extensive experiments confirm that our technique allows significantly more effective data analysis than the conventional publication method based on generalization. Specifically, anatomy permits aggregate reasoning with average error below 10%, which is lower than the error obtained from a generalized table by orders of magnitude.", "title": "" } ]
scidocsrr
146547ed597a23462ff5fccb23c76181
A vision-guided autonomous quadrotor in an air-ground multi-robot system
[ { "docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7", "text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.", "title": "" }, { "docid": "cff9a7f38ca6699b235c774232a56f54", "text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.", "title": "" } ]
[ { "docid": "569a7cfcf7dd4cc5132dc7ffa107bfcf", "text": "We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. Themost interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins’ and Prince’s classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-newdefinites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation. This paper will appear in Computational Linguistics.", "title": "" }, { "docid": "89cba76ab33c66a3687481ea56e1e556", "text": "With sustained growth of software complexity, finding security vulnerabilities in operating systems has become an important necessity. Nowadays, OS are shipped with thousands of binary executables. Unfortunately, methodologies and tools for an OS scale program testing within a limited time budget are still missing.\n In this paper we present an approach that uses lightweight static and dynamic features to predict if a test case is likely to contain a software vulnerability using machine learning techniques. To show the effectiveness of our approach, we set up a large experiment to detect easily exploitable memory corruptions using 1039 Debian programs obtained from its bug tracker, collected 138,308 unique execution traces and statically explored 76,083 different subsequences of function calls. We managed to predict with reasonable accuracy which programs contained dangerous memory corruptions.\n We also developed and implemented VDiscover, a tool that uses state-of-the-art Machine Learning techniques to predict vulnerabilities in test cases. Such tool will be released as open-source to encourage the research of vulnerability discovery at a large scale, together with VDiscovery, a public dataset that collects raw analyzed data.", "title": "" }, { "docid": "06f99b18bae3f15e77db8ff2d8c159cc", "text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. 
Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.", "title": "" }, { "docid": "5637bed8be75d7e79a2c2adb95d4c28e", "text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 
33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.", "title": "" }, { "docid": "cb693221e954efcc593b46553d7bea6f", "text": "The increased accessibility of digitally sourced data and advance technology to analyse it drives many industries to digital change. Many global businesses are talking about the potential of big data and they believe that analysing big data sets can help businesses derive competitive insight and shape organisations’ marketing strategy decisions. Potential impact of digital technology varies widely by industry. Sectors such as financial services, insurances and mobile telecommunications which are offering virtual rather than physical products are more likely highly susceptible to digital transformation. Howeverthe interaction between digital technology and organisations is complex and there are many barriers for to effective digital change which are presented by big data. Changes brought by technology challenges both researchers and practitioners. Various global business and digital tends have highlights the emergent need for collaboration between academia and market practitioners. There are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. In this paper we identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. INTRODUCTION Advances in digital technology has made a significant impact on marketing theory and practice. Technology expands the opportunity to capture better quality customer data, increase focus on customer relationship, rise of customer insight and Customer Relationship Management (CRM). Availability of big data made traditional marketing tools to work more powerful and innovative way. 
In the current digital age of marketing, some predictions of the effects of digital change have come to fruition, but there is still no definite answer as to what works and what doesn’t in terms of implementing the changes in an organisational context. The choice of this specific topic is motivated by the need for a better understanding of the impact of digital on the marketing field. This paper discusses the potential positive impact of big data on digital marketing. It also presents evidence of positive views in academia and highlights the gap between academia and practice. The main focus is on understanding the gap and providing recommendations for filling it in. The aim of this paper is to identify the theoretical dilemmas of the digital revolution and the importance of challenges within practice. Preliminary results presented here show that those industries that have tried to narrow the gap and put the necessary mechanisms in place to make use of big data for marketing are at the forefront of the market. In our discussion we shall identify these industries and present evaluations of which industry sectors need to look at understanding the impact that big data may have on their practices and businesses. Digital Marketing and Big Data In the early 1990s, when views about digital change first emerged, Parsons et al. (1998) believed that to achieve success in digital marketing, consumer marketers should create a new model with five essential elements for the new media environment. The figure below shows the five success factors and the issues that marketers should address around them. Figure 1. Digital marketing framework and levers (Parsons et al., 1998). Today, in the digital age of marketing, some predictions of the effects of these changes have come to fruition, but there are still no definite answers on what works and what doesn’t in terms of implementing them in an organisational context (S. Dibb, 2012). There are different explanations, arguments and views in the literature about the impact of digital on marketing strategy. First, it is important to define what is meant by digital marketing and what challenges it brings, and then to understand how it is adopted. Simply, Digital Marketing (2012) can be defined as “a sub branch of traditional Marketing using modern digital channels for the placement of products such as downloadable music, and primarily for communicating with stakeholders e.g. customers and investors about brand, products and business progress”. According to Smith (2007), digital marketing refers to “The use of digital technologies to create an integrated, targeted and measurable communication which helps to acquire and retain customers while building deeper relationships with them”. There are a number of accepted theoretical frameworks; however, as Parsons et al. (1998) suggested, the potential offered by digital marketing requires senior managers to consider carefully where and how to build it into each organisation. The most recent developments in this area have been triggered by the growing amount of digital data, now known as Big Data. Tech American Foundation (2004) defines Big Data as a “term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture storage, distribution, management and analysis of information”. D.
Krajicek (2013) argues that the big challenge of Big Data is the ability to focus on what is meaningful, not on what is possible: with so much information at their fingertips, marketers and their research partners can, and often do, fall into the “more is better” fallacy. Knowing something and knowing it quickly is not enough. Therefore, for Big Data to be valuable it needs to be sorted by professionals who have the skills to understand the dynamics of the market and can identify what is relevant and meaningful (G. Day, 2011). Data should be used to achieve competitive advantage by creating effective relationships with the target segments. According to K. Kendall (2014), with the right capabilities you can take a whole range of new data sources, such as web browsing, social data and geotracking data, and develop a much more complete profile of your customers; with this information you can segment better. Successful Big Data initiatives should start with a specific and clearly defined business requirement; leaders of these initiatives then need to assess the technical requirements, identify gaps in their capabilities, and plan the investment to close those gaps (Big Data Analytics, 2014). The impact and current challenges Bileviciene (2012) suggests that well-conducted market research is the basis for successful marketing and that a well-conducted study is the basis of successful market segmentation. Generally, marketing management is broken down into a series of steps, which include market research, segmentation of markets and positioning the company’s offering in such a way as to appeal to the targeted segments (OU Business School, 2007). Market segmentation refers to the process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the targeted segment (Business dictation, 2013). The goal of segmentation is to break down the target market into different consumer groups. According to Kotler and Armstrong (2011), customers have traditionally been classified based on four types of segmentation variables: geographic, demographic, psychographic and behavioural. There are many focuses, beliefs and arguments in the field of market segmentation. Many researchers believe that the traditional variables of demographic and geographic segments are outdated and that the theory regarding segmentation has become too narrow (Quinn and Dibb, 2010). According to Lin (2002), these variables should be part of a new, expanded view of market segmentation theory that focuses more on customers’ personalities and values. Dibb and Simkin (2009) argue that the priorities of market segmentation research are exploring the applicability of new segmentation bases across different products and contexts, developing more flexible data analysis techniques, and creating new research designs and data collection approaches; however, practical questions about implementation and integration have received less attention. According to S. Dibb (2012), from an academic perspective segmentation still has a strategic and tactical role, as shown in the figure below.
But in practice, as Dibb argues, “some things have not changed”: segmentation’s strategic role still matters; implementation is as much of a pain as always; and even the smartest segments need embedding. Figure 2: The role of segmentation (S. Dibb, 2012). Dilemmas with the implementation of digital change arise for various reasons. Some academics believed that greater access to data would reduce the need for more traditional segmentation, but research done in the field shows that traditional segmentation works as well as CRM (W. Boulding et al., 2005). Even though the marketing literature offers insights for improving the effectiveness of digital changes in the marketing field, there is limited understanding of how an organisation adapts its customer information processes once the technology is introduced into the organisation. J. Peltier et al. (2012) suggest that there is an urgent need for data management studies that capture insights from other disciplines, including organisational behaviour, change management and technology implementation. Reibstein et al. (2009) also highlight the emergent need for collaboration between academia and market practitioners. They point out that there is a “digital skill gap” within the marketing field. The authors argue that there are “theories-in-use” which are academically rigorous, but there is still a gap between theory and its implementation in practice. Changes brought by technology and availability of di", "title": "" }, { "docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e", "text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.", "title": "" }, { "docid": "c5851a9fe60c0127a351668ba5b0f21d", "text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II).
We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.", "title": "" }, { "docid": "a4933829bafd2d1e7c3ae3a9ab50c165", "text": "Head drop is a symptom commonly seen in patients with amyotrophic lateral sclerosis. These patients usually experience neck pain and have difficulty in swallowing and breathing. Static neck braces are used in current treatment. These braces, however, immobilize the head in a single configuration, which causes muscle atrophy. This letter presents the design of a dynamic neck brace for the first time in the literature, which can both measure and potentially assist in the head motion of the human user. This letter introduces the brace design method and validates its capability to perform measurements. The brace is designed based on kinematics data collected from a healthy individual via a motion capture system. A pilot study was conducted to evaluate the wearability of the brace and the accuracy of measurements with the brace. This study recruited ten participants who performed a series of head motions. The results of this human study indicate that the brace is wearable by individuals who vary in size, the brace allows nearly $70\\%$ of the overall range of head rotations, and the sensors on the brace give accurate motion of the head with an error of under $5^{\\circ }$ when compared to a motion capture system. We believe that this neck brace can be a valid and accurate measurement tool for human head motion. This brace will be a big improvement in the available technologies to measure head motion as these are currently done in the clinic using hand-held protractors in two orthogonal planes.", "title": "" }, { "docid": "7ccac1f6b753518495c44a48f4ec324a", "text": "We propose a method to recover the shape of a 3D room from a full-view indoor panorama. Our algorithm can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments. The core part of the algorithm is a constraint graph, which includes lines and superpixels as vertices, and encodes their geometric relations as edges. A novel approach is proposed to perform 3D reconstruction based on the constraint graph by solving all the geometric constraints as constrained linear least-squares. The selected constraints used for reconstruction are identified using an occlusion detection method with a Markov random field. Experiments show that our method can recover room shapes that can not be addressed by previous approaches. Our method is also efficient, that is, the inference time for each panorama is less than 1 minute.", "title": "" }, { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. 
The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" }, { "docid": "d912931af094b91634e2c194e5372c1e", "text": "Threats from social engineering can cause organisations severe damage if they are not considered and managed. In order to understand how to manage those threats, it is important to examine reasons why organisational employees fall victim to social engineering. In this paper, the objective is to understand security behaviours in practice by investigating factors that may cause an individual to comply with a request posed by a perpetrator. In order to attain this objective, we collect data through a scenario-based survey and conduct phishing experiments in three organisations. The results from the experiment reveal that the degree of target information in an attack increases the likelihood that an organisational employee fall victim to an actual attack. Further, an individual’s trust and risk behaviour significantly affects the actual behaviour during the phishing experiment. Computer experience at work, helpfulness and gender (females tend to be less susceptible to a generic attack than men), has a significant correlation with behaviour reported by respondents in the scenario-based survey. No correlation between the performance in the scenario-based survey and experiment was found. We argue that the result does not imply that one or the other method should be ruled out as they have both advantages and disadvantages which should be considered in the context of collecting data in the critical domain of information security. Discussions of the findings, implications and recommendations for future research are further provided.", "title": "" }, { "docid": "f69d31b04233f59dd92127cee5321910", "text": "The subject of this talk is Morse landscapes of natural functionals on infinitedimensional moduli spaces appearing in Riemannian geometry. First, we explain how recursion theory can be used to demonstrate that for many natural functionals on spaces of Riemannian structures, spaces of submanifolds, etc., their Morse landscapes are always more complicated than what follows from purely topological reasons. These Morse landscapes exhibit non-trivial “deep” local minima, cycles in sublevel sets that become nullhomologous only in sublevel sets corresponding to a much higher value of functional, etc. Our second topic is Morse landscapes of the length functional on loop spaces. Here the main conclusion (obtained jointly with Regina Rotman) is that these Morse landscapes can be much more complicated than what follows from topological considerations only if the length functional has “many” “deep” local minima, and the values of the length at these local minima are not “very large”. Mathematics Subject Classification (2000). 
Primary 53C23, 58E11, 53C20; Secondary 03D80, 68Q30, 53C40, 58E05.", "title": "" }, { "docid": "ab231cbc45541b5bdbd0da82571b44ca", "text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.", "title": "" }, { "docid": "ae8f5c568b2fdbb2dbef39ac277ddb24", "text": "Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation-exploration. Experiments on three benchmark datasets show that the balanced exploitation-exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.", "title": "" }, { "docid": "f383934a6b4b5971158e001b41f1f2ac", "text": "A survey of mental health problems of university students was carried out on 1850 participants in the age range 19-26 years. An indigenous Student Problem Checklist (SPCL) developed by Mahmood & Saleem, (2011), 45 items is a rating scale, designed to determine the prevalence rate of mental health problem among university students. This scale relates to four dimensions of mental health problems as reported by university students, such as: Sense of Being Dysfunctional, Loss of Confidence, Lack of self Regulation and Anxiety Proneness. For interpretation of the overall SPCL score, the authors suggest that scores falling above one SD should be considered as indicative of severe problems, where as score about 2 SD represent very severe problems. Our finding show that 31% of the participants fall in the “severe” category, whereas 16% fall in the “very severe” category. As far as the individual dimensions are concerned, 17% respondents comprising sample of the present study fall in very severe category Sense of Being Dysfunctional, followed by Loss of Confidence (16%), Lack of Self Regulation (14%) and Anxiety Proneness (12%). These findings are in lying with similar other studies on mental health of students. 
The role of variables like sample characteristics, the measure used, cultural and contextual factors are discussed in determining rates as well as their implications for student counseling service in prevention and intervention.", "title": "" }, { "docid": "8439dbba880179895ab98a521b4c254f", "text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI", "title": "" }, { "docid": "3eee111e4521528031019f83786efab7", "text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.", "title": "" }, { "docid": "6573b7d885685615d99f2ef21a7fce99", "text": "Keyword search on graph structured data has attracted a lot of attention in recent years. Graphs are a natural “lowest common denominator” representation which can combine relational, XML and HTML data. Responses to keyword queries are usually modeled as trees that connect nodes matching the keywords. In this paper we address the problem of keyword search on graphs that may be significantly larger than memory. We propose a graph representation technique that combines a condensed version of the graph (the “supernode graph”) which is always memory resident, along with whatever parts of the detailed graph are in a cache, to form a multi-granular graph representation. We propose two alternative approaches which extend existing search algorithms to exploit multigranular graphs; both approaches attempt to minimize IO by directing search towards areas of the graph that are likely to give good results. We compare our algorithms with a virtual memory approach on several real data sets. 
Our experimental results show significant benefits in terms of reduction in IO due to our algorithms.", "title": "" }, { "docid": "a636f977eb29b870cefe040f3089de44", "text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.", "title": "" }, { "docid": "af5a8f2811ff334d742f802c6c1b7833", "text": "Kalman filter extensions are commonly used algorithms for nonlinear state estimation in time series. The structure of the state and measurement models in the estimation problem can be exploited to reduce the computational demand of the algorithms. We review algorithms that use different forms of structure and show how they can be combined. We show also that the exploitation of the structure of the problem can lead to improved accuracy of the estimates while reducing the computational load.", "title": "" } ]
scidocsrr
ae408b6340eee0c0a75498379482cc1a
Land Use Classification in Remote Sensing Images by Convolutional Neural Networks
[ { "docid": "698fb992c5ff7ecc8d2e153f6b385522", "text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.", "title": "" }, { "docid": "b6da971f13c1075ce1b4aca303e7393f", "text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.", "title": "" } ]
[ { "docid": "d02e87a00aaf29a86cf94ad0c539fd0d", "text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.", "title": "" }, { "docid": "1971e12a6792991f77f59cbb42dedb32", "text": "The use of deep learning to solve the problems in literary arts has been a recent trend that gained a lot of attention and automated generation of music has been an active area. This project deals with the generation of music using raw audio files in the frequency domain relying on various LSTM architectures. Fully connected and convolutional layers are used along with LSTM’s to capture rich features in the frequency domain and increase the quality of music generated. The work is focused on unconstrained music generation and uses no information about musical structure(notes or chords) to aid learning.The music generated from various architectures are compared using blind fold tests. Using the raw audio to train models is the direction to tapping the enormous amount of mp3 files that exist over the internet without requiring the manual effort to make structured MIDI files. Moreover, not all audio files can be represented with MIDI files making the study of these models an interesting prospect to the future of such models.", "title": "" }, { "docid": "f071a3d699ba4b3452043b6efb14b508", "text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. 
Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.", "title": "" }, { "docid": "bb72e4d6f967fb88473756cdcbb04252", "text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.", "title": "" }, { "docid": "1c415034b3e9e0e2013624c69c386f13", "text": "For a microgrid (MG) to participate in a real-time and demand-side bidding market, high-level control strategies aiming at optimizing the operation of the MG are necessary. One of the difficulties for research of a competitive MG power market is the absence of efficient computational tools. Although many commercial power system simulators are available, these power system simulators are usually not directly applicable to solve the optimal power dispatch problem for an MG power market and to perform MG power-flow study. This paper analyzes the typical MG market policies and investigates how these policies can be converted in such a way that one can use commercial power system software for MG power market study. The paper also develops a mechanism suitable for the power-flow study of an MG containing inverter-interfaced distributed energy sources. The extensive simulation analyses are conducted for grid-tied and islanded operations of a benchmark MG network.", "title": "" }, { "docid": "409f3b2768a8adf488eaa6486d1025a2", "text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. 
Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.", "title": "" }, { "docid": "a014644ccccb2a06d820ee975cfdfa88", "text": "Analyzing customer feedback is the best way to channelize the data into new marketing strategies that benefit entrepreneurs as well as customers. Therefore an automated system which can analyze the customer behavior is in great demand. Users may write feedbacks in any language, and hence mining appropriate information often becomes intractable. Especially in a traditional feature-based supervised model, it is difficult to build a generic system as one has to understand the concerned language for finding the relevant features. In order to overcome this, we propose deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based approaches that do not require handcrafting of features. We evaluate these techniques for analyzing customer feedback sentences on four languages, namely English, French, Japanese and Spanish. Our empirical analysis shows that our models perform well in all the four languages on the setups of IJCNLP Shared Task on Customer Feedback Analysis. Our model achieved the second rank in French, with an accuracy of 71.75% and third ranks for all the other languages.", "title": "" }, { "docid": "23eb979ec3e17db2b162b659e296a10e", "text": "The authors would like to thank the Marketing Science Institute for their generous assistance in funding this research. We would also like to thank Claritas for providing us with data. We are indebted to Vincent Bastien, former CEO of Louis Vuitton, for the time he has spent with us critiquing our framework.", "title": "" }, { "docid": "31d055afdf6d40a5a2e897e9a78a0867", "text": "Photoluminescent graphene quantum dots (GQDs) have received enormous attention because of their unique chemical, electronic and optical properties. Here a series of GQDs were synthesized under hydrothermal processes in order to investigate the formation process and optical properties of N-doped GQDs. Citric acid (CA) was used as a carbon precursor and self-assembled into sheet structure in a basic condition and formed N-free GQD graphite framework through intermolecular dehydrolysis reaction. N-doped GQDs were prepared using a series of N-containing bases such as urea. Detailed structural and property studies demonstrated the formation mechanism of N-doped GQDs for tunable optical emissions. Hydrothermal conditions promote formation of amide between -NH₂ and -COOH with the presence of amine in the reaction. The intramoleculur dehydrolysis between neighbour amide and COOH groups led to formation of pyrrolic N in the graphene framework. Further, the pyrrolic N transformed to graphite N under hydrothermal conditions. N-doping results in a great improvement of PL quantum yield (QY) of GQDs. 
By optimized reaction conditions, the highest PL QY (94%) of N-doped GQDs was obtained using CA as a carbon source and ethylene diamine as a N source. The obtained N-doped GQDs exhibit an excitation-independent blue emission with single exponential lifetime decay.", "title": "" }, { "docid": "45712feb68b83cc054027807c1a30130", "text": "A solar energy semiconductor cooling box is presented in the paper. The cooling box is compact and easy to carry, can be made a special refrigeration unit which is smaller according to user needs. The characteristics of the cooling box are its simple use and maintenance, safe performance, decentralized power supply, convenient energy storage, no environmental pollution, and so on. In addition, compared with the normal mechanical refrigeration, the semiconductor refrigeration system which makes use of Peltier effect does not require pumps, compressors and other moving parts, and so there is no wear and noise. It does not require refrigerant so it will not produce environmental pollution, and it also eliminates the complex transmission pipeline. The concrete realization form of power are “heat - electric - cold”, “light - electric - cold”, “light - heat - electric - cold”. In order to achieve the purpose of cooling, solar cells generate electricity to drive the semiconductor cooling devices. The working principle is mainly photovoltaic effect and the Peltier effect.", "title": "" }, { "docid": "288ce84b9dd3244cce2044d53f35cd4b", "text": "Margaret-Anne Storey University of Victoria Victoria, BC, Canada mstorey@uvic.ca Abstract Modern software developers rely on an extensive set of social media tools and communication channels. The adoption of team communication platforms has led to the emergence of conversation-based tools and integrations, many of which are chatbots. Understanding how software developers manage their complex constellation of collaborators in conjunction with the practices and tools they use can bring valuable insights into socio-technical collaborative work in software development and other knowledge work domains.", "title": "" }, { "docid": "2afb992058eb720ff0baf4216e3a22c2", "text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.", "title": "" }, { "docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb", "text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. 
We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.", "title": "" }, { "docid": "4ef20b58ce1418e25e503d929798b0e4", "text": "The findings of 54 research studies were integrated through meta-analysis to determine the effects of calculators on student achievement and attitude levels. Effect sizes were generated through Glassian techniques of meta-analysis, and Hedges and Olkin’s (1985) inferential statistical methods were used to test the significance of effect size data. Results revealed that students’ operational skills and problem-solving skills improved when calculators were an integral part of testing and instruction. The results for both skill types were mixed when calculators were not part of assessment, but in all cases, calculator use did not hinder the development of mathematical skills. Students using calculators had better attitudes toward mathematics than their noncalculator counterparts. Further research is needed in the retention of mathematics skills after instruction and transfer of skills to other mathematics-related subjects.", "title": "" }, { "docid": "04c34a13eecc8f652e3231fcc8cb9aaa", "text": "C. Midgley et al. (2001) raised important questions about the effects of performance-approach goals. The present authors disagree with their characterization of the research findings and implications for theory. They discuss 3 reasons to revise goal theory: (a) the importance of separating approach from avoidance strivings, (b) the positive potential of performance-approach goals, and (c) identification of the ways performance-approach goals can combine with mastery goals to promote optimal motivation. 
The authors review theory and research to substantiate their claim that goal theory is in need of revision, and they endorse a multiple goal perspective. The revision of goal theory is underway and offers a more complex, but necessary, perspective on important issues of motivation, learning, and achievement.", "title": "" }, { "docid": "6b6285cd8512a2376ae331fda3fedf20", "text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.", "title": "" }, { "docid": "598f73160eae35c94d2f77a7b9c0ecb3", "text": "Homocysteine (HCY) is a degradation product of the methionine pathway. The B vitamins, in particular vitamin B12 and folate, are the primary nutritional determinant of HCY levels and therefore their deficiencies result in hyperhomocysteinaemia (HHCY). Prevalence of hyperhomocysteinemia (HHCY) and related dietary deficiencies in B vitamins and folate increase with age and have been related to osteoporosis and abnormal development of epiphyseal cartilage and bone in rodents. Here we provide a review of experimental and population studies. The negative effects of HHCY and/or B vitamins and folate deficiencies on bone formation and remodeling are documented by cell models, including primary osteoblasts, osteoclast and bone progenitor cells as well as by animal and human studies. However, underlying pathophysiological mechanisms are complex and remain poorly understood. Whether these associations are the direct consequences of impaired one carbon metabolism is not clarified and more studies are still needed to translate these findings to human population. To date, the evidence is limited and somewhat conflicting, however further trials in groups most vulnerable to impaired one carbon metabolism are required.", "title": "" }, { "docid": "ffc521b597ab5332c3541a06a01c5531", "text": "This research deals with a vital and important issue in computer world. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. It represents five of the development models namely, waterfall, Iteration, V-shaped, spiral and Extreme programming. These models have advantages and disadvantages as well. Therefore, the main objective of this research is to represent different models of software development and make a comparison between them to show the features and defects of each model.", "title": "" }, { "docid": "f57fbb53b069fe60d7dcd3d450fd3783", "text": "Host-based security tools such as anti-virus and intrusion detection systems are not adequately protected on today's computers. 
Malware is often designed to immediately disable any security tools upon installation, rendering them useless. While current research has focused on moving these vulnerable security tools into an isolated virtual machine, this approach cripples security tools by preventing them from doing active monitoring. This paper describes an architecture that takes a hybrid approach, giving security tools the ability to do active monitoring while still benefiting from the increased security of an isolated virtual machine. We discuss the architecture and a prototype implementation that can process hooks from a virtual machine running Windows XP on Xen. We conclude with a security analysis and show the performance of a single hook to be 28 μsecs in the best case.", "title": "" } ]
scidocsrr
9769f8fd969f8b42a3643e01d04ea6fc
CLUSTERGEN: a statistical parametric synthesizer using trajectory modeling
[ { "docid": "6d517b4459ee29c5554280e8339adbcc", "text": "This paper describes an HMM-based speech synthesis system (HTS), in which speech waveform is generated from HMMs themselves, and applies it to English speech synthesis using the general speech synthesis architecture of Festival. Similarly to other datadriven speech synthesis approaches, HTS has a compact language dependent module: a list of contextual factors. Thus, it could easily be extended to other languages, though the first version of HTS was implemented for Japanese. The resulting run-time engine of HTS has the advantage of being small: less than 1 M bytes, excluding text analysis part. Furthermore, HTS can easily change voice characteristics of synthesized speech by using a speaker adaptation technique developed for speech recognition. The relation between the HMM-based approach and other unit selection approaches is also discussed.", "title": "" } ]
[ { "docid": "2cea5f37c8c03fc0b6abc9e5d70bb1b3", "text": "This paper summarizes our approach to the author profiling task – a part of evaluation lab PAN’13. We have used ensemble-based classification on a large feature set. All the features are roughly described and experimental section provides evaluation of different methods and classification approaches.", "title": "" }, { "docid": "0fdc468347fc6c50767687d5364a098e", "text": "We study a generalization of the setting of regenerating codes, motivated by applications to storage systems consisting of clusters of storage nodes. There are n clusters in total, with m nodes per cluster. A data file is coded and stored across the mn nodes, with each node storing α symbols. For availability of data, we demand that the file is retrievable by downloading the entire content from any subset of k clusters. Nodes represent entities that can fail, and here we distinguish between intra-cluster and inter-cluster bandwidth-costs during node repair. Node-repair is accomplished by downloading β symbols each from any set of d other clusters. The replacement-node also downloads content from any set of ℓ surviving nodes in the same cluster during the repair process. We identify the optimal trade-off between storage-overhead and inter-cluster (IC) repair-bandwidth under functional repair, and also present optimal exact-repair code constructions for a class of parameters. Our results imply that it is possible to simultaneously achieve both optimal storage overhead and optimal minimum IC bandwidth, for sufficiently large values of nodes per cluster. The simultaneous optimality comes at the expense of intra-cluster bandwidth, and we obtain lower bounds on the necessary intra-cluster repair-bandwidth. Simulation results based on random linear network codes suggest optimality of the bounds on intra-cluster repair-bandwidth.", "title": "" }, { "docid": "2a5f555c00d98a87fe8dd6d10e27dc38", "text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.", "title": "" }, { "docid": "a0f4b7f3f9f2a5d430a3b8acead2b746", "text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. 
Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse", "title": "" }, { "docid": "eea288f275b0eab62dddd64a469a2d63", "text": "Glucose control serves as the primary method of diabetes management. Current digital therapeutic approaches for subjects with Type 1 diabetes mellitus (T1DM) such as the artificial pancreas and bolus calculators leverage machine learning techniques for predicting subcutaneous glucose for improved control. Deep learning has recently been applied in healthcare and medical research to achieve state-of-the-art results in a range of tasks including disease diagnosis, and patient state prediction among others. In this work, we present a deep learning model that is capable of predicting glucose levels over a 30-minute horizon with leading accuracy for simulated patient cases (RMSE = 10.02±1.28 [mg/dl] and MARD = 5.95±0.64%) and real patient cases (RMSE = 21.23±1.15 [mg/dl] and MARD = 10.53±1.28%). In addition, the model also provides competitive performance in forecasting adverse glycaemic events with minimal time lag both in a simulated patient dataset (MCChyperglycaemia = 0.82±0.06 and MCChypoglycaemia = 0.76±0.13) and in a real patient dataset (MCChyperglycaemia = 0.79±0.04 and MCChypoglycaemia = 0.28±0.11). This approach is evaluated on a dataset of 10 simulated cases generated from the UVa/Padova simulator and a clinical dataset of 5 real cases each containing glucose readings, insulin bolus, and meal (carbohydrate) data. Performance of the recurrent convolutional neural network is benchmarked against four state-of-the-art algorithms: support vector regression (SVR), latent variable (LVX) model, autoregressive model (ARX), and neural network for predicting glucose algorithm (NNPG).", "title": "" }, { "docid": "c67010d61ec7f9ea839bbf7d2dce72a1", "text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. 
More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.", "title": "" }, { "docid": "61359ded391acaaaab0d4b9a0d851b8c", "text": "A laparoscopic Heller myotomy with partial fundoplication is considered today in most centers in the United States and abroad the treatment of choice for patients with esophageal achalasia. Even though the operation has initially a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in Centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.", "title": "" }, { "docid": "8f29de514e2a266a02be4b75d62be44f", "text": "In this work, we apply word embeddings and neural networks with Long Short-Term Memory (LSTM) to text classification problems, where the classification criteria are decided by the context of the application. We examine two applications in particular. The first is that of Actionability, where we build models to classify social media messages from customers of service providers as Actionable or Non-Actionable. We build models for over 30 different languages for actionability, and most of the models achieve accuracy around 85%, with some reaching over 90% accuracy. We also show that using LSTM neural networks with word embeddings vastly outperform traditional techniques. Second, we explore classification of messages with respect to political leaning, where social media messages are classified as Democratic or Republican. The model is able to classify messages with a high accuracy of 87.57%. As part of our experiments, we vary different hyperparameters of the neural networks, and report the effect of such variation on the accuracy. These actionability models have been deployed to production and help company agents provide customer support by prioritizing which messages to respond to. The model for political leaning has been opened and made available for wider use.", "title": "" }, { "docid": "c974e6b4031fde2b8e1de3ade33caef4", "text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. 
Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1", "title": "" }, { "docid": "049f0308869c53bbb60337874789d569", "text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.", "title": "" }, { "docid": "bbdd4ffd6797d00c3547626959118b92", "text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.", "title": "" }, { "docid": "9952748e3d86ac550a30c2e59ac1ccd3", "text": "Targeting Interleukin-1 in Heart Disease Print ISSN: 0009-7322. Online ISSN: 1524-4539 Copyright © 2013 American Heart Association, Inc. All rights reserved. 
is published by the American Heart Association, 7272 Greenville Avenue, Dallas, TX 75231 Circulation doi: 10.1161/CIRCULATIONAHA.113.003199 2013;128:1910-1923 Circulation. http://circ.ahajournals.org/content/128/17/1910 World Wide Web at: The online version of this article, along with updated information and services, is located on the", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "432e8e346b2407cef8b6deabeea5d94e", "text": "Plant-based psychedelics, such as psilocybin, have an ancient history of medicinal use. After the first English language report on LSD in 1950, psychedelics enjoyed a short-lived relationship with psychology and psychiatry. Used most notably as aids to psychotherapy for the treatment of mood disorders and alcohol dependence, drugs such as LSD showed initial therapeutic promise before prohibitive legislature in the mid-1960s effectively ended all major psychedelic research programs. Since the early 1990s, there has been a steady revival of human psychedelic research: last year saw reports on the first modern brain imaging study with LSD and three separate clinical trials of psilocybin for depressive symptoms. In this circumspective piece, RLC-H and GMG share their opinions on the promises and pitfalls of renewed psychedelic research, with a focus on the development of psilocybin as a treatment for depression.", "title": "" }, { "docid": "3420aa0f36f8114a7c3962bf443bf884", "text": "In this paper, for the first time, 600 ∼ 6500 V IGBTs utilizing a new vertical structure of “Light Punch-Through (LPT) (II)” with Thin Wafer Process Technology demonstrate high total performance with low overall loss and high safety operating area (SOA) capability. This collector structure enables a wide position in the trade-off characteristics between on-state voltage (VCE(sat)) and turn-off loss (EOFF) without utilizing any conventional carrier lifetime technique. In addition, this device concept achieves a wide operating junction temperature (@218 ∼ 423 K) of IGBT without the snap-back phenomena (≤298 K) and thermal destruction (≥398 K). 
From the viewpoint of the high performance of IGBT, the breaking limitation of any Si wafer size, the proposed LPT(II) concept that utilizes an FZ silicon wafer and Thin Wafer Technology is the most promising candidate as a vertical structure of IGBT for the any voltage class.", "title": "" }, { "docid": "a049749849761dc4cd65d4442fd135f8", "text": "Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy because the neighborhood size k (or other locality parameter) is usually chosen by cross validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross validation of the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest neighbor (HKNN), and a new local Bayesian quadratic discriminant analysis (local BDA). The empirical effectiveness of this technique versus cross validation is confirmed with experiments on seven benchmark data sets, showing that similar classification performance can be attained without any training.", "title": "" }, { "docid": "567f48fef5536e9f44a6c66deea5375b", "text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.", "title": "" }, { "docid": "ad78f226f21bd020e625659ad3ddbf74", "text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. 
For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.", "title": "" }, { "docid": "f582f73b7a7a252d6c17766a9c5f8dee", "text": "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.", "title": "" } ]
scidocsrr
522bb46a58652c1f314665fd7088ede0
Track k: medical information systems.
[ { "docid": "cdc3e4b096be6775547a8902af52e798", "text": "OBJECTIVE\nThe aim of the study was to present a systematic review of studies that investigate the effects of robot-assisted therapy on motor and functional recovery in patients with stroke.\n\n\nMETHODS\nA database of articles published up to October 2006 was compiled using the following Medline key words: cerebral vascular accident, cerebral vascular disorders, stroke, paresis, hemiplegia, upper extremity, arm, and robot. References listed in relevant publications were also screened. Studies that satisfied the following selection criteria were included: (1) patients were diagnosed with cerebral vascular accident; (2) effects of robot-assisted therapy for the upper limb were investigated; (3) the outcome was measured in terms of motor and/or functional recovery of the upper paretic limb; and (4) the study was a randomized clinical trial (RCT). For each outcome measure, the estimated effect size (ES) and the summary effect size (SES) expressed in standard deviation units (SDU) were calculated for motor recovery and functional ability (activities of daily living [ADLs]) using fixed and random effect models. Ten studies, involving 218 patients, were included in the synthesis. Their methodological quality ranged from 4 to 8 on a (maximum) 10-point scale.\n\n\nRESULTS\nMeta-analysis showed a nonsignificant heterogeneous SES in terms of upper limb motor recovery. Sensitivity analysis of studies involving only shoulder-elbow robotics subsequently demonstrated a significant homogeneous SES for motor recovery of the upper paretic limb. No significant SES was observed for functional ability (ADL).\n\n\nCONCLUSION\nAs a result of marked heterogeneity in studies between distal and proximal arm robotics, no overall significant effect in favor of robot-assisted therapy was found in the present meta-analysis. However, subsequent sensitivity analysis showed a significant improvement in upper limb motor function after stroke for upper arm robotics. No significant improvement was found in ADL function. However, the administered ADL scales in the reviewed studies fail to adequately reflect recovery of the paretic upper limb, whereas valid instruments that measure outcome of dexterity of the paretic arm and hand are mostly absent in selected studies. Future research into the effects of robot-assisted therapy should therefore distinguish between upper and lower robotics arm training and concentrate on kinematical analysis to differentiate between genuine upper limb motor recovery and functional recovery due to compensation strategies by proximal control of the trunk and upper limb.", "title": "" } ]
[ { "docid": "b0b024072e7cde0b404a9be5862ecdd1", "text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.", "title": "" }, { "docid": "efb305d95cf7197877de0b2fb510f33a", "text": "Drug-induced cardiotoxicity is emerging as an important issue among cancer survivors. For several decades, this topic was almost exclusively associated with anthracyclines, for which cumulative dose-related cardiac damage was the limiting step in their use. Although a number of efforts have been directed towards prediction of risk, so far no consensus exists on the strategies to prevent and monitor chemotherapy-related cardiotoxicity. Recently, a new dimension of the problem has emerged when drugs targeting the activity of certain tyrosine kinases or tumor receptors were recognized to carry an unwanted effect on the cardiovascular system. Moreover, the higher than expected incidence of cardiac dysfunction occurring in patients treated with a combination of old and new chemotherapeutics (e.g. anthracyclines and trastuzumab) prompted clinicians and researchers to find an effective approach to the problem. From the pharmacological standpoint, putative molecular mechanisms involved in chemotherapy-induced cardiotoxicity will be reviewed. From the clinical standpoint, current strategies to reduce cardiotoxicity will be critically addressed. In this perspective, the precise identification of the antitarget (i.e. the unwanted target causing heart damage) and the development of guidelines to monitor patients undergoing treatment with cardiotoxic agents appear to constitute the basis for the management of drug-induced cardiotoxicity.", "title": "" }, { "docid": "cf1c04b4d0c61632d7a3969668d5e751", "text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. 
For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.", "title": "" }, { "docid": "7c27bfa849ba0bd49f9ddaec9beb19b5", "text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.", "title": "" }, { "docid": "eb101664f08f0c5c7cf6bcf8e058b180", "text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.", "title": "" }, { "docid": "9441113599194d172b6f618058b2ba88", "text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.", "title": "" }, { "docid": "d4fff9c75f3e8e699bbf5815b81e77b0", "text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. 
For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.", "title": "" }, { "docid": "69624d1ab7b438d5ff4b5192f492a11a", "text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.", "title": "" }, { "docid": "d035f857c5f9a57957314a574bb2b6ff", "text": "uted through the environments’ material and cultural artifacts and through other people in collaborative efforts to complete complex tasks (Latour, 1987; Pea, 1993). For example, Hutchins (1995a) documents how the task of landing a plane can be best understood through investigating a unit of analysis that includes the pilot, the manufactured tools, and the social context. In this case, the tools and social context are not merely “aides” to the pilot’s cognition but rather essential features of a composite. Similarly, tools such as calculators enable students to complete computational tasks in ways that would be distinctly different if the calculators were absent (Pea, 1993). In these cases, cognitive activity is “stretched over” actors and artifacts. Hence, human activity is best understood by considering both artifacts and actors together through cycles of task completion because the artifacts and actors are essentially intertwined in action contexts (Lave, 1988). In addition to material tools, action is distributed across language, theories of action, and interpretive schema, providing the “mediational means” that enable and transform intelligent social activity (Brown & Duguid, 1991; Leont’ev, 1975, 1981; Vygotsky, 1978; Wertsch, 1991). 
These material and cultural artifacts form identifiable aspects of the “sociocultural” context as products of particular social and cultural situations (Vygotsky, 1978; Wertsch, 1991). Actors develop common understandings and draw on cultural, social, and historical norms in order to think and act. Thus, even when a particular cognitive task is undertaken by an individual apparently in solo, the individual relies on a variety of sociocultural artifacts such as computational methods and language that are social in origin (Wertsch, 1991). HowWhile there is an expansive literature about what school structures, programs, and processes are necessary for instructional change, we know less about how these changes are undertaken or enacted by school leaders in their daily work. To study school leadership we must attend to leadership practice rather than chiefly or exclusively to school structures, programs, and designs. An in-depth analysis of the practice of school leaders is necessary to render an account of how school leadership works. Knowing what leaders do is one thing, but without a rich understanding of how and why they do it, our understanding of leadership is incomplete. To do that, it is insufficient to simply observe school leadership in action and generate thick descriptions of the observed practice. We need to observe from within a conceptual framework. In our opinion, the prevailing framework of individual agency, focused on positional leaders such as principals, is inadequate because leadership is not just a function of what these leaders know and do. Hence, our intent in this paper is to frame an exploration of how leaders think and act by developing a distributed perspective on leadership practice. The Distributed Leadership Study, a study we are currently conducting in Chicago, uses the distributed framework outlined in this paper to frame a program of research that examines the practice of leadership in urban elementary schools working to change mathematics, science, and literacy instruction (see http://www.letus.org/ dls/index.htm). This 4-year longitudinal study, funded by the National Science Foundation and the Spencer Foundation, is designed to make the “black box” of leadership practice more transparent through an in-depth analysis of leadership practice. This research identifies the tasks, actors, actions, and interactions of school leadership as they unfold together in the daily life of schools. The research program involves in-depth observations and interviews with formal and informal leaders and classroom teachers as well as a social network analysis in schools in the Chicago metropolitan area. We outline the distributed framework below, beginning with a brief review of the theoretical underpinnings for this work—distributed cognition and activity theory—which we then use to re-approach the subject of leadership practice. Next we develop our distributed theory of leadership around four ideas: leadership tasks and functions, task enactment, social distribution of task enactment, and situational distribution of task enactment. Our central argument is that school leadership is best understood as a distributed practice, stretched over the school’s social and situational contexts.", "title": "" }, { "docid": "e808fa6ebe5f38b7672fad04c5f43a3a", "text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. 
In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.", "title": "" }, { "docid": "95746fa1170e0498e92a443e6fc92336", "text": "A paradigm shift is taking place in medicine from using synthetic implants and tissue grafts to a tissue engineering approach that uses degradable porous material scaffolds integrated with biological cells or molecules to regenerate tissues. This new paradigm requires scaffolds that balance temporary mechanical function with mass transport to aid biological delivery and tissue regeneration. Little is known quantitatively about this balance as early scaffolds were not fabricated with precise porous architecture. Recent advances in both computational topology design (CTD) and solid free-form fabrication (SFF) have made it possible to create scaffolds with controlled architecture. This paper reviews the integration of CTD with SFF to build designer tissue-engineering scaffolds. It also details the mechanical properties and tissue regeneration achieved using designer scaffolds. Finally, future directions are suggested for using designer scaffolds with in vivo experimentation to optimize tissue-engineering treatments, and coupling designer scaffolds with cell printing to create designer material/biofactor hybrids.", "title": "" }, { "docid": "f3348f2323a5a97980551f00367703d1", "text": "Bacterial samples had been isolated from clinically detected diseased juvenile Pangasius, collected from Mymensingh, Bangladesh. Primarily, the isolates were found as Gram-negative, motile, oxidase-positive, fermentative, and O/129 resistant Aeromonas bacteria. The species was exposed as Aeromonas hydrophila from esculin hydrolysis test. Ten isolates of A. hydrophila were identified from eye lesions, kidney, and liver of the infected fishes. Further characterization of A. hydrophila was accomplished using API-20E and antibiotic sensitivity test. Isolates were highly resistant to amoxyclav among ten different antibiotics. All isolates were found as immensely pathogenic to healthy fishes while intraperitoneal injection. Histopathologically, necrotic hematopoietic tissues with pyknotic nuclei, mild hemorrhage, and wide vacuolation in kidney, liver, and muscle were principally noticed due to Aeromonad infection. So far, this is the first full note on characterizing A. hydrophila from diseased farmed Pangasius in Bangladesh. The present findings will provide further direction to develop theranostic strategies of A. hydrophila infection.", "title": "" }, { "docid": "bb28519ca1161bafb9b3812b1fd66ed1", "text": "Considering the variations of inertia in real applications, an adaptive control scheme for the permanent-magnet synchronous motor speed-regulation system is proposed in this paper. First, a composite control method, i.e., the extended-state-observer (ESO)-based control method, is employed to ensure the performance of the closed-loop system. The ESO can estimate both the states and the disturbances simultaneously so that the composite speed controller can have a corresponding part to compensate for the disturbances. Then, considering the case of variations of load inertia, an adaptive control scheme is developed by analyzing the control performance relationship between the feedforward compensation gain and the system inertia. 
By using inertia identification techniques, a fuzzy-inferencer-based supervisor is designed to automatically tune the feedforward compensation gain according to the identified inertia. Simulation and experimental results both show that the proposed method achieves a better speed response in the presence of inertia variations.", "title": "" }, { "docid": "9b8317646ce6cad433e47e42198be488", "text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.", "title": "" }, { "docid": "865ca372a2b073e672c535a94c04c2ad", "text": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross validation of results. 
It is useful for applications related to OCR of handwritten Bangla Digit and can also be extended to include OCR of handwritten characters of Bangla alphabet.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "17f171d0d91c1d914600a238f6446650", "text": "One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design philosophy, which allows us to design the ARMA coefficients independently from the underlying graph, renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph are changing over time. We show that in case of a time-varying graph signal our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domains. We also derive sufficient conditions for filter stability when the graph and signal are time-varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, accompanied by strong theoretical guarantees. Keywords— distributed graph filtering, signal processing on graphs, time-varying graph signals, time-varying graphs", "title": "" }, { "docid": "257b4e500cb0342835cd139e4eb11570", "text": "The capability of avoid obstacles is the one of the key issues in autonomous search-and-rescue robots research area. In this study, the avoiding obstacles capability has been provided to the virtula robots in USARSim environment. The aim is finding the minimum movement when robot faces an obstacle in path. For obstacle avoidance we used an real time path planning method which is called Vector Field Histogram (VFH). After experiments we observed that VFH method is successful method for obstacle avoidance. Moreover, the usage of VFH method is highly incresing the amount of the visited places per unit time.", "title": "" }, { "docid": "ce9238236040aed852b1c8f255088b61", "text": "This paper proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge topology for induction heating application. The operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. 
The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 93 to 96 kHz.", "title": "" }, { "docid": "6806ff9626d68336dce539a8f2c440af", "text": "Obesity and hypertension, major risk factors for the metabolic syndrome, render individuals susceptible to an increased risk of cardiovascular complications, such as adverse cardiac remodeling and heart failure. There has been much investigation into the role that an increase in the renin-angiotensin-aldosterone system (RAAS) plays in the pathogenesis of metabolic syndrome and in particular, how aldosterone mediates left ventricular hypertrophy and increased cardiac fibrosis via its interaction with the mineralocorticoid receptor (MR). Here, we review the pertinent findings that link obesity with elevated aldosterone and the development of cardiac hypertrophy and fibrosis associated with the metabolic syndrome. These studies illustrate a complex cross-talk between adipose tissue, the heart, and the adrenal cortex. Furthermore, we discuss findings from our laboratory that suggest that cardiac hypertrophy and fibrosis in the metabolic syndrome may involve cross-talk between aldosterone and adipokines (such as adiponectin).", "title": "" } ]
scidocsrr
e43f03d688e52d00c7d017e0e029e7a4
Design of LTCC Wideband Patch Antenna for LMDS Band Applications
[ { "docid": "bf77cd91ec7a5133998e60dfd4ec520f", "text": "A simple procedure for the design of compact stacked-patch antennas is presented based on LTCC multilayer packaging technology. The advantage of this topology is that only one parameter, i.e., the substrate thickness (or equivalently the number of LTCC layers), needs to be adjusted in order to achieve an optimized bandwidth performance. The validity of the new design strategy is verified through applying it to practical compact antenna design for several wireless communication bands, including ISM 2.4-GHz band, IEEE 802.11a 5.8-GHz, and LMDS 28-GHz band. It is shown that a 10-dB return-loss bandwidth of 7% can be achieved for the LTCC (/spl epsiv//sub r/=5.6) multilayer structure with a thickness of less than 0.03 wavelengths, which can be realized using a different number of laminated layers for different frequencies (e.g., three layers for the 28-GHz band).", "title": "" } ]
[ { "docid": "25e6f4b6c86fac766c09aae302ec9516", "text": "ABSTRACT. The purpose of this study is to construct doctors’ acceptance model of Electronic Medical Records (EMR) in private hospitals. The model extends the Technology Acceptance Model (TAM) with two factors of Individual Capabilities; Self-Efficacy (SE) and Perceived Behavioral Control (PBC). The initial findings proposes additional factors over the original factors in TAM making Perceived Usefulness (PU), Perceived Ease Of Use (PEOU), Behavioral Intention to use (BI), SE, and PBC working in incorporation. A cross-sectional survey was used in which data were gathered by a personal administered questionnaire as the instrument for data collection. Doctors of public hospitals were involved in this study which proves that all factors are reliable.", "title": "" }, { "docid": "dfdd857de86c75e769492b56a092b242", "text": "Understanding the anatomy of the ankle ligaments is important for correct diagnosis and treatment. Ankle ligament injury is the most frequent cause of acute ankle pain. Chronic ankle pain often finds its cause in laxity of one of the ankle ligaments. In this pictorial essay, the ligaments around the ankle are grouped, depending on their anatomic orientation, and each of the ankle ligaments is discussed in detail.", "title": "" }, { "docid": "48513729ea0b9ad7cf74626ca5eed686", "text": "We consider a generalization of the lcm-sum function, and we give two kinds of asymptotic formulas for the sum of that function. Our results include a generalization ofBordelì es's results and a refinement of the error estimate of Alladi's result. We prove these results by the method similar to those ofBordelì es.", "title": "" }, { "docid": "4408d485de63034cb2225ee7aa9e3afe", "text": "We present the characterization of dry spiked biopotential electrodes and test their suitability to be used in anesthesia monitoring systems based on the measurement of electroencephalographic signals. The spiked electrode consists of an array of microneedles penetrating the outer skin layers. We found a significant dependency of the electrode-skin-electrode impedance (ESEI) on the electrode size (i.e., the number of spikes) and the coating material of the spikes. Electrodes larger than 3/spl times/3 mm/sup 2/ coated with Ag-AgCl have sufficiently low ESEI to be well suited for electroencephalograph (EEG) recordings. The maximum measured ESEI was 4.24 k/spl Omega/ and 87 k/spl Omega/, at 1 kHz and 0.6 Hz, respectively. The minimum ESEI was 0.65 k/spl Omega/ an 16 k/spl Omega/, at the same frequencies. The ESEI of spiked electrodes is stable over an extended period of time. The arithmetic mean of the generated DC offset voltage is 11.8 mV immediately after application on the skin and 9.8 mV after 20-30 min. A spectral study of the generated potential difference revealed that the AC part was unstable at frequencies below approximately 0.8 Hz. Thus, the signal does not interfere with a number of clinical applications using real-time EEG. Comparing raw EEG recordings of the spiked electrode with commercial Zipprep electrodes showed that both signals were similar. Due to the mechanical strength of the silicon microneedles and the fact that neither skin preparation nor electrolytic gel is required, use of the spiked electrode is convenient. 
The spiked electrode is very comfortable for the patient.", "title": "" }, { "docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa", "text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.", "title": "" }, { "docid": "f97ed9ef35355feffb1ebf4242d7f443", "text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.", "title": "" }, { "docid": "0c863db545e890a2f0d58f188692999b", "text": "Digital investigation in the cloud is challenging, but there's also opportunities for innovations in digital forensic solutions (such as remote forensic collection of evidential data from cloud servers client devices and the underlying supporting infrastructure such as distributed file systems). This column describes the challenges and opportunities in cloud forensics.", "title": "" }, { "docid": "ca550339bd91ba8e431f1e82fbaf5a99", "text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. 
Disadvantages of known SAT approaches for such problems were overcome by our new method.", "title": "" }, { "docid": "68fe4f62d48270395ca3f257bbf8a18a", "text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.", "title": "" }, { "docid": "48aea9478d2a9f1edb108202bd65e8dd", "text": "The popularity of mobile devices and location-based services (LBSs) has raised significant concerns regarding the location privacy of their users. A popular approach to protect location privacy is anonymizing the users of LBS systems. In this paper, we introduce an information-theoretic notion for location privacy, which we call perfect location privacy. We then demonstrate how anonymization should be used by LBS systems to achieve the defined perfect location privacy. We study perfect location privacy under two models for user movements. First, we assume that a user’s current location is independent from her past locations. Using this independent identically distributed (i.i.d.) model, we show that if the pseudonym of the user is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{r-1}}}\\right)$ </tex-math></inline-formula> observations are made by the adversary for that user, then the user has perfect location privacy. Here, <inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula> is the number of the users in the network and <inline-formula> <tex-math notation=\"LaTeX\">$r$ </tex-math></inline-formula> is the number of all possible locations. Next, we model users’ movements using Markov chains to better model real-world movement patterns. We show that perfect location privacy is achievable for a user if the user’s pseudonym is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{|E|-r}}}\\right)$ </tex-math></inline-formula> observations are collected by the adversary for that user, where <inline-formula> <tex-math notation=\"LaTeX\">$|E|$ </tex-math></inline-formula> is the number of edges in the user’s Markov chain model.", "title": "" }, { "docid": "f32477f15fb7f550c74bc052c487a14b", "text": "This paper demonstrates the sketch drawing capability of NAO humanoid robot. Two redundant degrees of freedom elbow yaw (RElbowYaw) and wrist yaw (RWristYaw) of the right hand have been sacrificed because of their less contribution in drawing. The Denavit-Hartenberg (DH) parameters of the system has been defined in order to measure the working envelop of the right hand as well as to achieve the inverse kinematic solution. 
A linear transformation has been used to transform the image points with respect to real world coordinate system and novel 4 point calibration technique has been proposed to calibrate the real world coordinate system with respect to NAO end effector.", "title": "" }, { "docid": "848dd074e4615ea5ecb164c96fac6c63", "text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.", "title": "" }, { "docid": "0cef7d9df5606df8becd2226233e3c99", "text": "Telecare medical information systems (TMISs) are increasingly popular technologies for healthcare applications. Using TMISs, physicians and caregivers can monitor the vital signs of patients remotely. Since the database of TMISs stores patients’ electronic medical records (EMRs), only authorized users should be granted the access to this information for the privacy concern. To keep the user anonymity, recently, Chen et al. proposed a dynamic ID-based authentication scheme for telecare medical information system. They claimed that their scheme is more secure and robust for use in a TMIS. However, we will demonstrate that their scheme fails to satisfy the user anonymity due to the dictionary attacks. It is also possible to derive a user password in case of smart card loss attacks. Additionally, an improved scheme eliminating these weaknesses is also presented.", "title": "" }, { "docid": "794e78423eaa3484ba28127d76e4bd74", "text": "Classification of environmental sounds is a fundamental procedure for a wide range of real-world applications. In this paper, we propose a novel acoustic feature extraction method for classifying the environmental sounds. The proposed method is motivated from the image processing technique, local binary pattern (LBP), and works on a spectrogram which forms two-dimensional (time-frequency) data like an image. Since the spectrogram contains noisy pixel values, for improving classification performance, it is crucial to extract the features which are robust to the fluctuations in pixel values. We effectively incorporate the local statistics, mean and standard deviation on local pixels, to establish robust LBP. In addition, we provide the technique of L2-Hellinger normalization which is efficiently applied to the proposed features so as to further enhance the discriminative power while increasing the robustness. 
In the experiments on environmental sound classification using RWCP dataset that contains 105 sound categories, the proposed method produces the superior performance (98.62%) compared to the other methods, exhibiting significant improvements over the standard LBP method as well as robustness to noise and low computation time.", "title": "" }, { "docid": "8bdd02547be77f4c825c9aed8016ddf8", "text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.", "title": "" }, { "docid": "1bc95cb394896d57c601358574ea4f89", "text": "The transition from an informative to a service oriented interactive governmental portals has become a necessity due to the time and cost saving benefits for both governments and users. User experience is a key factor in maintaining these benefits. In this study we propose an E-government Portal Assessment Method (EGPAM), which is a direct method for measuring user experience in e-government portals. We present a case study assessing the portal of the Ministry of Public Works (MOW) in Kuwait. Results showed that having a direct measurement to user experience enabled easier identification of the current level of user satisfaction and provided a guidance on ways to improve user experience and addressing identified issues.", "title": "" }, { "docid": "eae0f8a921b301e52c822121de6c6b58", "text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. 
In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.", "title": "" }, { "docid": "f9d2305bc8dd4921970529f4c816b98b", "text": "Chaos scales graph processing from secondary storage to multiple machines in a cluster. Earlier systems that process graphs from secondary storage are restricted to a single machine, and therefore limited by the bandwidth and capacity of the storage system on a single machine. Chaos is limited only by the aggregate bandwidth and capacity of all storage devices in the entire cluster.\n Chaos builds on the streaming partitions introduced by X-Stream in order to achieve sequential access to storage, but parallelizes the execution of streaming partitions. Chaos is novel in three ways. First, Chaos partitions for sequential storage access, rather than for locality and load balance, resulting in much lower pre-processing times. Second, Chaos distributes graph data uniformly randomly across the cluster and does not attempt to achieve locality, based on the observation that in a small cluster network bandwidth far outstrips storage bandwidth. Third, Chaos uses work stealing to allow multiple machines to work on a single partition, thereby achieving load balance at runtime.\n In terms of performance scaling, on 32 machines Chaos takes on average only 1.61 times longer to process a graph 32 times larger than on a single machine. In terms of capacity scaling, Chaos is capable of handling a graph with 1 trillion edges representing 16 TB of input data, a new milestone for graph processing capacity on a small commodity cluster.", "title": "" }, { "docid": "cfc2c98e3422d32ca4c30fea1f18b74a", "text": "While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolve over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type of users becomes increasingly focused on the topics they search for as time goes by, and the other becomes increasingly diversifying. After analyzing these two aspects separately, we investigate whether, and to which degree, there is a correlation between topic shifts and query reformulations. 
Surprisingly, users’ preferences of query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediate next timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.", "title": "" } ]
scidocsrr
dbaf6f105044a7944eb6467095edbc1f
Why do narcissists take more risks? Testing the roles of perceived risks and benefits of risky behaviors
[ { "docid": "0f9b073461047d698b6bba8d9ee7bff2", "text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.", "title": "" } ]
[ { "docid": "1420ca15b9abeb003cee176d8825bad9", "text": "Academic study of cloud computing is an emerging research field in Saudi Arabia. Saudi Arabia represents the largest economy in the Arab Gulf region, which makes it a potential market of cloud computing technologies. This cross-sectional exploratory empirical research is based on technology–organization–environment (TOE) framework, targeting higher education institutions. In this study, the factors that affect the cloud adoption by higher education institutions were identified and tested using SmartPLS software, a powerful statistical analysis tool for structural equation modeling. Three factors were found significant in this context. Relative advantage, complexity and data concern were the most significant factors. The model explained 47.9 % of the total adoption variance. The findings offer education institutions and cloud computing service providers with better understanding of factors affecting the adoption of cloud computing.", "title": "" }, { "docid": "3a090b6fdf404e5262c7c36e3ae5879e", "text": "Background: While several benefits are attributed to the Internet and video games, an important proportion of the population presents symptoms related to possible new technological addictions and there has been little discussion of treatment of problematic technology use. Although demand for knowledge is growing, only a small number of treatments have been described. Objective: To conduct a systematic review of the literature, to establish Cognitive Behavioral Therapy (CBT) as a possible strategy for treating Internet and video game addictions. Method: The review was conducted in the following databases: Science Direct on Line, PubMed, PsycINFO, Cochrane Clinical Trials Library, BVS and SciELO. The keywords used were: Cognitive Behavioral Therapy; therapy; treatment; with association to the terms Internet addiction and video game addiction. Given the scarcity of studies in the field, no restrictions to the minimum period of publication were made, so that articles found until October 2013 were accounted. Results: Out of 72 articles found, 23 described CBT as a psychotherapy for Internet and video game addiction. The manuscripts showed the existence of case studies and protocols with satisfactory efficacy. Discussion: Despite the novelty of technological dependencies, CBT seems to be applicable and allows an effective treatment for this population. Lemos IL, et al. / Rev Psiq Clín. 2014;41(3):82-8", "title": "" }, { "docid": "5fd10b2277918255133f2e37a55e1103", "text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. 
For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.", "title": "" }, { "docid": "7aad80319743ac72d2c4e117e5f831fa", "text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.", "title": "" }, { "docid": "4bbe3b4512ff5bf18aa17d54b6645049", "text": "The aim of this study is to find a minimal size of text samples for authorship attribution that would provide stable results independent of random noise. A few controlled tests for different sample lengths, languages and genres are discussed and compared. Although I focus on Delta methodology, the results are valid for many other multidimensional methods relying on word frequencies and \"nearest neighbor\" classifications.", "title": "" }, { "docid": "4cfe999fa7b2594327b6109084f0164f", "text": "A large number of post-transcriptional modifications of transfer RNAs (tRNAs) have been described in prokaryotes and eukaryotes. They are known to influence their stability, turnover, and chemical/physical properties. A specific subset of tRNAs contains a thiolated uridine residue at the wobble position to improve the codon-anticodon interaction and translational accuracy. The proteins involved in tRNA thiolation are reminiscent of prokaryotic sulfur transfer reactions and of the ubiquitylation process in eukaryotes. In plants, some of the proteins involved in this process have been identified and show a high degree of homology to their non-plant equivalents. For other proteins, the identification of the plant homologs is much less clear, due to the low conservation in protein sequence. This manuscript describes the identification of CTU2, the second CYTOPLASMIC THIOURIDYLASE protein of Arabidopsis thaliana. CTU2 is essential for tRNA thiolation and interacts with ROL5, the previously identified CTU1 homolog of Arabidopsis. CTU2 is ubiquitously expressed, yet its activity seems to be particularly important in root tissue. A ctu2 knock-out mutant shows an alteration in root development. The analysis of CTU2 adds a new component to the so far characterized protein network involved in tRNA thiolation in Arabidopsis. 
CTU2 is essential for tRNA thiolation as a ctu2 mutant fails to perform this tRNA modification. The identified Arabidopsis CTU2 is the first CTU2-type protein from plants to be experimentally verified, which is important considering the limited conservation of these proteins between plant and non-plant species. Based on the Arabidopsis protein sequence, CTU2-type proteins of other plant species can now be readily identified.", "title": "" }, { "docid": "5b96fcbe3ac61265ef5407f4e248193e", "text": "Modelling the similarity of sentence pairs is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) is a model that improved previous state-of-the-art models in 2015 and has remained a popular model for sentence similarity tasks. However, until now, there has not been a rigorous study of how the model actually achieves competitive accuracy. In this thesis, we report on a series of detailed experiments that break down the contribution of each component of MP-CNN towards its statistical accuracy and how they affect model robustness. We find that two key components of MP-CNN are non-essential to achieve competitive accuracy and they make the model less robust to changes in hyperparameters. Furthermore, we suggest simple changes to the architecture and experimentally show that we improve the accuracy of MP-CNN when we remove these two major components of MP-CNN and incorporate these small changes, pushing its scores closer to more recent works on competitive semantic textual similarity and answer selection datasets, while using eight times fewer parameters.", "title": "" }, { "docid": "d11fc4a2a799356380354af144aafe37", "text": "[Context and motivation] For the past several years, Cyber Physical Systems (CPS) have emerged as a new system type like embedded systems or information systems. CPS are highly context-dependent, observe the world through sensors, act upon it through actuators, and communicate with one another through powerful networks. It has been widely argued that these properties pose new challenges for the development process. [Question/problem] Yet, how these CPS properties impact the development process has thus far been subject to conjecture. An investigation of a development process from a cyber physical perspective has thus far not been undertaken. [Principal ideas/results] In this paper, we conduct initial steps into such an investigation. We present a case study involving the example of a software simulator of an airborne traffic collision avoidance system. [Contribution] The goal of the case study is to investigate which of the challenges from the literature impact the development process of CPS the most.", "title": "" }, { "docid": "275cdc97004df1886c8da247c7206a71", "text": "This paper considers optimal synthesis of a special type of four-bar linkages. Combination of this optimal four-bar linkage with on of it’s cognates and elimination of two redundant cognates will result in a Watt’s six-bar mechanism, which generates straight and parallel motion. This mechanism can be utilized for legged machines. The advantage of this mechanism is that the leg remains straight during it’s contact period and because of it’s parallel motion, the legs can be as wide as desired to increase contact area and decrease the number of legs required to keep body’s stability statically and dynamically. 
“Genetic algorithm” optimization method is used to find optimal lengths. It is especially useful for problems like the coupler curve equation which are completely nonlinear or extremely difficult to solve.", "title": "" }, { "docid": "f8062f3ece1ff887047303d53cf37323", "text": "The task of automatically tracking the visual attention in dynamic visual scenes is highly challenging. To approach it, we propose a Bayesian online learning algorithm. As the visual scene changes and new objects appear, based on a mixture model, the algorithm can identify and tell visual saccades (transitions) from visual fixation clusters (regions of interest). The approach is evaluated on real-world data, collected from eye-tracking experiments in driving sessions.", "title": "" }, { "docid": "199527da97881d37606ddf2416b46fe4", "text": "Driven by the demands on healthcare resulting from the shift toward more sedentary lifestyles, considerable effort has been devoted to the monitoring and classification of human activity. In previous studies, various classification schemes and feature extraction methods have been used to identify different activities from a range of different datasets. In this paper, we present a comparison of 14 methods to extract classification features from accelerometer signals. These are based on the wavelet transform and other well-known time- and frequency-domain signal characteristics. To allow an objective comparison between the different features, we used two datasets of activities collected from 20 subjects. The first set comprised three commonly used activities, namely, level walking, stair ascent, and stair descent, and the second a total of eight activities. Furthermore, we compared the classification accuracy for each feature set across different combinations of three different accelerometer placements. The classification analysis has been performed with robust subject-based cross-validation methods using a nearest-neighbor classifier. The findings show that, although the wavelet transform approach can be used to characterize nonstationary signals, it does not perform as accurately as frequency-based features when classifying dynamic activities performed by healthy subjects. Overall, the best feature sets achieved over 95% intersubject classification accuracy.", "title": "" }, { "docid": "2d4357831f83de026759776e019934da", "text": "Mapping the physical location of nodes within a wireless sensor network (WSN) is critical in many applications such as tracking and environmental sampling. Passive RFID tags pose an interesting solution to localizing nodes because an outside reader, rather than the tag, supplies the power to the tag. Thus, utilizing passive RFID technology allows a localization scheme to not be limited to objects that have wireless communication capability because the technique only requires that the object carries a RFID tag. This paper illustrates a method in which objects can be localized without the need to communicate received signal strength information between the reader and the tagged item. The method matches tag count percentage patterns under different signal attenuation levels to a database of tag count percentages, attenuations and distances from the base station reader.", "title": "" }, { "docid": "be5e1336187b80bc418b2eb83601fbd4", "text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. 
The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network, is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain speed advantage. Runtime. 
Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an inde... Figure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller miss rates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left hand side. The SpatialPooling+/Katamari methods use additional motion information.", "title": "" }, { "docid": "9a4bdfe80a949ec1371a917585518ae4", "text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.", "title": "" }, { "docid": "0bf292fdbc04805b4bd671d6f5099cf7", "text": "We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.", "title": "" }, { "docid": "8c35fd3040e4db2d09e3d6dc0e9ae130", "text": "Internet of Things is referred to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and can be controlled remotely via cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making its way into the market with its own advantages in high range and better speed. Along with IEEE, WiFi Alliance has a new standard for the proximity applications.
Neighbor Awareness Network (NAN) popularly known as WiFi Aware is that standard which enables low power discovery over WiFi and can light up many proximity based use cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications with its benefits in some of the scenarios. When we consider WiFi the infrastructure already exists in terms of access points all around in public and smart phones or tablets come with WiFi as a default feature hence enabling NAN can be easy and if we can pair them with IoT, many innovative use cases can evolve.", "title": "" }, { "docid": "8bae8e7937f4c9a492a7030c62d7d9f4", "text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.", "title": "" }, { "docid": "b1bced32626640b0078f4782d6ab1d40", "text": "This report summarizes my overview talk on software clone detection research. It first discusses the notion of software redundancy, cloning, duplication, and similarity. Then, it describes various categorizations of clone types, empirical studies on the root causes for cloning, current opinions and wisdom of consequences of cloning, empirical studies on the evolution of clones, ways to remove, to avoid, and to detect them, empirical evaluations of existing automatic clone detector performance (such as recall, precision, time and space consumption) and their fitness for a particular purpose, benchmarks for clone detector evaluations, presentation issues, and last but not least application of clone detection in other related fields. After each summary of a subarea, I am listing open research questions.", "title": "" }, { "docid": "db2ebec1eeec213a867b10fe9550bfc7", "text": "Photovoltaic method is very popular for generating electrical power. Its energy production depends on solar radiation on that location and orientation. Shadow rapidly decreases performance of the Photovoltaic system. In this research, it is being investigated how exactly real-time shadow can be detected. In principle, 3D city models containing roof structure, vegetation, thematically differentiated surface and texture, are suitable to simulate exact real-time shadow. An automated procedure to measure exact shadow effect from the 3D city models and a long-term simulation model to determine the produced energy from the photovoltaic system is being developed here. In this paper, a method for detecting shadow for direct radiation has been discussed with its result using a 3D city model to perform a solar energy potentiality analysis. Figure 1. Partial Shadow on PV array (Reisa 2011). Former military area Scharnhauser Park shown in figure 2 has been chosen as the case study area for this research. It is an urban conversion and development area of 150 hectares in the community of Ostfildern on the southern border near Stuttgart with 7000 inhabitants.
About 80% heating energy demand of the whole area is supplied by renewable energies and a small portion of electricity is delivered by existing roof top photovoltaic system (Tereci et al, 2009). This has been selected as the study area for this research because of availability CityGML and LIDAR data, building footprints and existing photovoltaic cells on roofs and façades. Land Survey Office Baden-Wüttemberg provides the laser scanning data with a density of 4 points per square meter at a high resolution of 0.2 meter. The paper has been organized with a brief introduction at the beginning explaining background of photovoltaic energy and motivation for this research in. Then the effect of shadow on photovoltaic cells and a methodology for detecting shadow from direct radiation. Then result has been shown applying the methodology and some brief idea about the future work of this research has been presented.", "title": "" }, { "docid": "f7edc938429e5f085e355004325b7698", "text": "We present a large scale unified natural language inference (NLI) dataset for providing insight into how well sentence representations capture distinct types of reasoning. We generate a large-scale NLI dataset by recasting 11 existing datasets from 7 different semantic tasks. We use our dataset of approximately half a million context-hypothesis pairs to test how well sentence encoders capture distinct semantic phenomena that are necessary for general language understanding. Some phenomena that we consider are event factuality, named entity recognition, figurative language, gendered anaphora resolution, and sentiment analysis, extending prior work that included semantic roles and frame semantic parsing. Our dataset will be available at https:// www.decomp.net, to grow over time as additional resources are recast.", "title": "" } ]
scidocsrr
9c50b948f6621f5dbacc2a9ce01b2f6e
Monopole Antenna With Inkjet-Printed EBG Array on Paper Substrate for Wearable Applications
[ { "docid": "6f13503bf65ff58b7f0d4f3282f60dec", "text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.", "title": "" }, { "docid": "e99d7b425ab1a2a9a2de4e10a3fbe766", "text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF ldquopassive RFIDrdquo antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.", "title": "" }, { "docid": "784f3100dbd852b249c0e9b0761907f1", "text": "The bi-directional beam from an equiangular spiral antenna (EAS) is changed to a unidirectional beam using an electromagnetic band gap (EBG) reflector. The antenna height, measured from the upper surface of the EBG reflector to the spiral arms, is chosen to be extremely small to realize a low-profile antenna: 0.07 wavelength at the lowest analysis frequency of 3 GHz. The analysis shows that the EAS backed by the EBG reflector does not reproduce the inherent wideband axial ratio characteristic observed when the EAS is isolated in free space. The deterioration in the axial ratio is examined by decomposing the total radiation field into two field components: one component from the equiangular spiral and the other from the EBG reflector. 
The examination reveals that the amplitudes and phases of these two field components do not satisfy the constructive relationship necessary for circularly polarized radiation. Based on this finding, next, the EBG reflector is modified by gradually removing the patch elements from the center region of the reflector, thereby satisfying the required constructive relationship between the two field components. This equiangular spiral with a modified EBG reflector shows wideband characteristics with respect to the axial ratio, input impedance and gain within the design frequency band (4-9 GHz). Note that, for comparison, the antenna characteristics for an EAS isolated in free space and an EAS backed by a perfect electric conductor are also presented.", "title": "" } ]
[ { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "69bb52e45db91f142b8c5297abd21282", "text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.", "title": "" }, { "docid": "3b47a88f37a06ec44d510a4dbfc0993d", "text": "Governance, Risk and Compliance (GRC) as an integrated concept has gained great interest recently among researchers in the Information Systems (IS) field. The need for more effective and efficient business processes in the area of financial controls drives enterprises to successfully implement GRC systems as an overall goal when they are striving for enterprise value of their integrated systems. The GRC implementation process is a significant parameter influencing the success of operational performance and financial governance and supports the practices for competitive advantage within the organisations. However, GRC literature is limited regarding the analysis of their implementation and adoption success. Therefore, there is a need for further research and contribution in the area of GRC systems and more specifically their implementation process. The research at hand recognizes GRC as a fundamental business requirement and focuses on the need to analyse the implementation process of such enterprise solutions. The research includes theoretical and empirical investigation of the GRC implementation within an enterprise and develops a framework for the analysis of the GRC adoption. The approach suggests that the three success factors (integration, optimisation, information) influence the adoption of the GRC and more specifically their implementation process. 
The proposed framework followed a case study approach to confirm its functionality and is evaluated through interviews with stakeholders involved in GRC implementations. Furthermore, it can be used by the organisations when considering the adoption of GRC solutions and can also suggest a tool for researchers to analyse and explain further the GRC implementation process.", "title": "" }, { "docid": "d7c2d97fbd7591bdd53e711ed5582f6c", "text": "Progress in Information and Communication Technologies (ICTs) is shaping more and more the healthcare domain. ICTs adoption provides new opportunities, as well as discloses novel and unforeseen application scenarios. As a result, the overall health sector is potentially benefited, as the quality of medical services is expected to be enhanced and healthcare costs are reduced, in spite of the increasing demand due to the aging population. Notwithstanding the above, the scientific literature appears to be still quite scattered and fragmented, also due to the interaction of scientific communities with different background, skills, and approaches. A number of specific terms have become of widespread use (e.g., regarding ICTs-based healthcare paradigms as well as at health-related data formats), but without commonly-agreed definitions. While scientific surveys and reviews have also been proposed, none of them aims at providing a holistic view of how today ICTs are able to support healthcare. This is the more and more an issue, as the integrated application of most if not all the main ICTs pillars is the most agreed upon trend, according to the Industry 4.0 paradigm about ongoing and future industrial revolution. In this paper we aim at shedding light on how ICTs and healthcare are related, identifying the most popular ICTs-based healthcare paradigms, together with the main ICTs backing them. Studying more than 300 papers, we survey outcomes of literature analyses and results from research activities carried out in this field. We characterize the main ICTs-based healthcare paradigms stemmed out in recent years fostered by the evolution of ICTs. Dissecting the scientific literature, we also identify the technological pillars underpinning the novel applications fueled by these technological advancements. Guided by the scientific literature, we review a number of application scenarios gaining momentum thanks to the beneficial impact of ICTs. As the evolution of ICTs enables to gather huge and invaluable data from numerous and highly varied sources in easier ways, here we also focus on the shapes that this healthcare-related data may take. This survey provides an up-to-date picture of the novel healthcare applications enabled by the ICTs advancements, with a focus on their specific hottest research challenges. It helps the interested readership (from both technological and medical fields) not to lose orientation in the complex landscapes possibly generated when advanced ICTs are adopted in application scenarios dictated by the critical healthcare domain.", "title": "" }, { "docid": "ce6e5532c49b02988588f2ac39724558", "text": "Many modern computing environments involve dynamic peer groups. Distributed Simulation, multi-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement.
in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticated key agreement and extends the results to Group Diffie-Hellman key agreement. In the process, some new security properties (unique to groups) are discussed.", "title": "" }, { "docid": "46465926afb62b9f73386a962047875d", "text": "Cervical cancer represents the second leading cause of death for women worldwide. The importance of the diet and its impact on specific types of neoplasia has been highlighted, focusing again interest in the analysis of dietary phytochemicals. Polyphenols have shown a wide range of cellular effects: they may prevent carcinogens from reaching the targeted sites, support detoxification of reactive molecules, improve the elimination of transformed cells, increase the immune surveillance and the most important factor is that they can influence tumor suppressors and inhibit cellular proliferation, interfering in this way with the steps of carcinogenesis. From the studies reviewed in this paper, it is clear that certain dietary polyphenols hold great potential in the prevention and therapy of cervical cancer, because they interfere in carcinogenesis (in the initiation, development and progression) by modulating the critical processes of cellular proliferation, differentiation, apoptosis, angiogenesis and metastasis. Specifically, polyphenols inhibit the proliferation of HPV cells, through induction of apoptosis, growth arrest, inhibition of DNA synthesis and modulation of signal transduction pathways. The effects of combinations of polyphenols with chemotherapy and radiotherapy used in the treatment of cervical cancer showed results in the resistance of cervical tumor cells to chemo- and radiotherapy, one of the main problems in the treatment of cervical neoplasia that can lead to failure of the treatment because of the decreased efficiency of the therapy.", "title": "" }, { "docid": "6085fab45784706f5c99e7c316a0fc55", "text": "The localization of photosensitizers in the subcellular compartments during photodynamic therapy (PDT) plays a major role in the cell destruction; therefore, the aim of this study was to investigate the intracellular localization of Chlorin e6-PVP (Photolon™) in malignant and normal cells. Our study involves the characterization of the structural determinants of subcellular localization of Photolon, and how subcellular localization affects the selective toxicity of Photolon towards tumor cells. Using confocal laser scanning microscopy (CLSM) and fluorescent organelle probes; we examined the subcellular localization of Photolon™ in the murine colon carcinoma CT-26 and normal fibroblast (NHLC) cells. Our results demonstrated that after 30 min of incubation, the distribution of Photolon was localized mainly in the cytoplasmic organelles including the mitochondria, lysosomes, Golgi apparatus, around the nuclear envelope and also in the nucleus but not in the endoplasmic reticulum whereas in NHLC cells, Photolon was found to be localized minimally only in the nucleus not in other organelles studied. The relationship between subcellular localization of Photolon and PDT-induced apoptosis was investigated. Apoptotic cell death was judged by the formation of known apoptotic hallmarks including, the phosphatidylserine externalization (PS), PARP cleavage, a substrate for caspase-3 and the formation of apoptotic nuclei.
At the irradiation dose of 1 J/cm2, the percentage of apoptotic cells was 80%. This study provided substantial evidence that Photolon preferentially localized in the subcellular organelles in the following order: nucleus, mitochondria, lysosomes and the Golgi apparatus and subsequent photodamage of the mitochondria and lysosomes played an important role in PDT-mediated apoptosis in CT-26 cells. Our results based on the cytoplasmic organelles and the intranuclear localization extensively enhance the efficacy of PDT with appropriate photosensitizer and light dose and support the idea that PDT can contribute to elimination of malignant cells by inducing apoptosis, which is of physiological significance.", "title": "" }, { "docid": "9b5bccc259b512de43e5fe49a5b3fa21", "text": "A combination of techniques that is becoming increasingly popular is the construction of part-based object representations using the outputs of interest-point detectors. Our contributions in this paper are twofold: first, we propose a primal-sketch-based set of image tokens that are used for object representation and detection. Second, top-down information is introduced based on an efficient method for the evaluation of the likelihood of hypothesized part locations. This allows us to use graphical model techniques to complement bottom-up detection, by proposing and finding the parts of the object that were missed by the front-end feature detection stage. Detection results for four object categories validate the merits of this joint top-down and bottom-up approach.", "title": "" }, { "docid": "ac6b3d140b2e31b8b19dc37d25207eca", "text": "In this paper, a comparative study on frequency and time domain analyses for the evaluation of the seismic response of subsoil to the earthquake shaking is presented. After some remarks on the solutions given by the linear elasticity theory for this type of problem, the use of some widespread numerical codes is illustrated and the results are compared with the available theoretical predictions. Bedrock elasticity, viscous and hysteretic damping, stress-dependency of the stiffness and nonlinear behaviour of the soil are taken into account. A series of comparisons between the results obtained by the different computer programs is shown.", "title": "" }, { "docid": "ee727069682d1ed5181f05327e96aced", "text": "The problem of place recognition appears in different mobile robot navigation problems including localization, SLAM, or change detection in dynamic environments. Whereas this problem has been studied intensively in the context of robot vision, relatively few approaches are available for three-dimensional range data. In this paper, we present a novel and robust method for place recognition based on range images. Our algorithm matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans. A further advantage of our approach is that the features allow for a computation of the relative transformations between scans which is relevant for registration processes. Our approach has been implemented and tested on different 3D data sets obtained outdoors.
In several experiments we demonstrate the advantages of our approach also in comparison to existing techniques.", "title": "" }, { "docid": "2fee5493d0cec652a403f5659f6a2a2a", "text": "The lethal(3)malignant brain tumor [t(3)mbt] gene causes, when mutated, malignant growth of the adult optic neuroblasts and ganglion mother cells in the larval brain and imaginal disc overgrowth. Via overlapping deficiencies a genomic region of approximately 6.0 kb was identified, containing l(3)mbt+ gene sequences. The l(3)mbt+ gene encodes seven transcripts of 5.8 kb, 5.65 kb, 5.35 kb, 5.25 kb, 5.0 kb, 4.4 kb and 1.8 kb. The putative MBT163 protein, encompassing 1477 amino acids, is proline-rich and contains a novel zinc finger. In situ hybridizations of whole mount embryos and larval tissues revealed l(3)mbt+ RNA ubiquitously present in stage 1 embryos and throughout embryonic development in most tissues. In third instar larvae l(3)mbt+ RNA is detected in the adult optic anlagen and the imaginal discs, the tissues directly affected by l(3)mbt mutations, but also in tissues, showing normal development in the mutant, such as the gut, the goblet cells and the hematopoietic organs.", "title": "" }, { "docid": "47ddc934a733f5b2d05dcd0275c7fb06", "text": "Accurately forecasting pollution concentration of PM2.5 can provide early warning for the government to alert the persons suffering from air pollution. Many existing approaches fail at providing favorable results duo to shallow architecture in forecasting model that can not learn suitable features. In addition, multiple meteorological factors increase the difficulty for understanding the influence of the PM2.5 concentration. In this paper, a deep neural network is proposed for accurately forecasting PM2.5 pollution concentration based on manifold learning. Firstly, meteorological factors are specified by the manifold learning method, reducing the dimension without any expert knowledge. Secondly, a deep belief network (DBN) is developed to learn the features of the input candidates obtained by the manifold learning and the one-day ahead PM2.5 concentration. Finally, the deep features are modeled by a regression neural network, and the local PM2.5 forecast is yielded. The addressed model is evaluated by the dataset in the period of 28/10/2013 to 31/3/2017 in Chongqing municipality of China. The study suggests that deep learning is a promising technique in PM2.5 concentration forecasting based on the manifold learning.", "title": "" }, { "docid": "f6d9efb7cfee553bc02a5303a86fd626", "text": "OBJECTIVE\nTo perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students (MBI-SS), and investigate its reliability, validity and cross-cultural invariance.\n\n\nMETHODS\nThe face validity involved the participation of a multidisciplinary team. Content validity was performed. The Portuguese version was completed in 2009, on the internet, by 958 Brazilian and 556 Portuguese university students from the urban area. Confirmatory factor analysis was carried out using as fit indices: the χ²/df, the Comparative Fit Index (CFI), the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). To verify the stability of the factor solution according to the original English version, cross-validation was performed in 2/3 of the total sample and replicated in the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. 
The discriminant validity was assessed, and the internal consistency was estimated by the Cronbach's alpha coefficient. Concurrent validity was estimated by the correlational analysis of the mean scores of the Portuguese version and the Copenhagen Burnout Inventory, and the divergent validity was compared to the Beck Depression Inventory. The invariance of the model between the Brazilian and the Portuguese samples was assessed.\n\n\nRESULTS\nThe three-factor model of Exhaustion, Disengagement and Efficacy showed good fit (c 2/df = 8.498, CFI = 0.916, GFI = 0.902, RMSEA = 0.086). The factor structure was stable (λ:χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residues: χ²dif = 21.514, p = 0.121). Adequate convergent validity (VEM = 0.45;0.64, CC = 0.82;0.88), discriminant (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21, 0.74). The assessment of the divergent validity was impaired by the approach of the theoretical concept of the dimensions Exhaustion and Disengagement of the Portuguese version with the Beck Depression Inventory. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ:χ²dif = 84.768, p<0.001; Cov: χ²dif = 129.206, p < 0.001; Residues: χ²dif = 518.760, p < 0.001).\n\n\nCONCLUSIONS\nThe Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant between the countries, indicating the absence of cross-cultural stability.", "title": "" }, { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "7735668d4f8407d9514211d9f5492ce6", "text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.", "title": "" }, { "docid": "f91238b11b84099cdbb16c8c4b7c75ae", "text": "This study investigates the case-based learning experience of 133 undergraduate veterinarian science students. Using qualitative methodologies from relational Student Learning Research, variation in the quality of the learning experience was identified, ranging from coherent, deep, quality experiences of the cases, to experiences that separated significant aspects, such as the online case histories, laboratory test results, and annotated images emphasizing symptoms, from the meaning of the experience. 
A key outcome of this study was that a significant percentage of the students surveyed adopted a poor approach to learning with online resources in a blended experience even when their overall learning experience was related to cohesive conceptions of veterinary science, and that the difference was even more marked for less successful students. The outcomes from the study suggest that many students are unsure of how to approach the use of online resources in ways that are likely to maximise benefits for learning in blended experiences, and that the benefits from case-based learning such as authenticity and active learning can be threatened if issues closely associated with qualitative variation arising from incoherence in the experience are not addressed.", "title": "" }, { "docid": "050c701f2663f4fa85aadd65a5dc96f2", "text": "The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after eukaryotic orthologous groups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The eukaryotic orthologous groups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant, Arabidopsis thaliana, two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the analyzed eukaryotic 110,655 gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; addition of new eukaryotic genomes is expected to result in substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the KOG set is much greater than the ubiquitous portion of the COG set (~1% of the COGs). In part, this difference is probably due to the small number of included eukaryotic genomes, but it could also reflect the relative compactness of eukaryotes as a clade and the greater evolutionary stability of eukaryotic genomes.
The updated collection of orthologous protein sets for prokaryotes and eukaryotes is expected to be a useful platform for functional annotation of newly sequenced genomes, including those of complex eukaryotes, and genome-wide evolutionary studies.", "title": "" }, { "docid": "18c885e8cb799086219585e419140ba5", "text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.", "title": "" }, { "docid": "0a732282dc782b8893628697e39c9153", "text": "Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has prevented reinforcement learning from taking full advantage of scalable neural networks is that of catastrophic forgetting. The latter affects supervised learning systems when highly correlated input samples are presented, as well as when input patterns are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space. Unfortunately, reinforcement learning presents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. Meaningful training examples are acquired as the agent explores different regions of its state/action space. When the agent is in one such region, only highly correlated samples from that region are typically acquired. Moreover, the regions that the agent is likely to visit will depend on its current policy, suggesting that an agent that has a good policy may avoid exploring particular regions. The confluence of these factors means that without some mitigation techniques, supervised neural networks as function approximation in temporal-difference learning will only be applicable to the simplest test cases. In this work, we develop a feed forward neural network architecture that mitigates catastrophic forgetting by partitioning the input space in a manner that selectively activates a different subset of hidden neurons for each region of the input space. We demonstrate the effectiveness of the proposed framework on a cart-pole balancing problem for which other neural network architectures exhibit training instability likely due to catastrophic forgetting. We demonstrate that our technique produces better results, particularly with respect to a performance-stability measure.", "title": "" }, { "docid": "0f699e9f14753b2cbfb7f7a3c7057f40", "text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. 
In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1", "title": "" } ]
scidocsrr
abb586c09275c904f91719164e593524
Sentence Ranking with the Semantic Link Network in Scientific Paper
[ { "docid": "0836e5d45582b0a0eec78234776aa419", "text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an! agile and responsive datacenter built from your existing technology investments.’,! ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/ datacenter/virtualization.aspx’,! ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’,! ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’,! ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/ virtualization.aspx’,! ...! Data! #Topics: 228! #Candidate Labels: ~6,000! Domains: BLOGS, BOOKS, NEWS, PUBMED! Candidate labels rated by humans (0-3) ! Published by Lau et al. (2011). 4. Scoring Candidate Labels! Candidate Label: L = {w1, w2, ..., wm}! Scoring Function: Task: The aim of the task is to associate labels with automatically generated topics.", "title": "" } ]
[ { "docid": "ef6040561aaae594f825a6cabd4aa259", "text": "This study investigated the extent of young adults’ (N = 393; 17–30 years old) experience of cyberbullying, from the perspectives of cyberbullies and cyber-victims using an online questionnaire survey. The overall prevalence rate shows cyberbullying is still present after the schooling years. No significant gender differences were noted, however females outnumbered males as cyberbullies and cyber-victims. Overall no significant differences were noted for age, but younger participants were found to engage more in cyberbullying activities (i.e. victims and perpetrators) than the older participants. Significant differences were noted for Internet frequency with those spending 2–5 h online daily reported being more victimized and engage in cyberbullying than those who spend less than an hour daily. Internet frequency was also found to significantly predict cyber-victimization and cyberbullying, indicating that as the time spent on Internet increases, so does the chances to be bullied and to bully someone. Finally, a positive significant association was observed between cyber-victims and cyberbullies indicating that there is a tendency for cyber-victims to become cyberbullies, and vice versa. Overall it can be concluded that cyberbullying incidences are still taking place, even though they are not as rampant as observed among the younger users. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "edacac86802497e0e43c4a03bfd3b925", "text": "This paper presents a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm, which provides accurate and robust localization within the globally consistent map in real time on a standard CPU. This is achieved by firstly performing the visual-inertial extended kalman filter(EKF) to provide motion estimate at a high rate. However the filter becomes inconsistent due to the well known linearization issues. So we perform a keyframe-based visual-inertial bundle adjustment to improve the consistency and accuracy of the system. In addition, a loop closure detection and correction module is also added to eliminate the accumulated drift when revisiting an area. Finally, the optimized motion estimates and map are fed back to the EKF-based visual-inertial odometry module, thus the inconsistency and estimation error of the EKF estimator are reduced. In this way, the system can continuously provide reliable motion estimates for the long-term operation. The performance of the algorithm is validated on public datasets and real-world experiments, which proves the superiority of the proposed algorithm.", "title": "" }, { "docid": "a0c92111e9d821ffd26e08f69b434002", "text": "Cell phones are a pervasive new communication technology, especially among college students. This paper examines college students cell phone usage from a behavioral and psychological perspective. Utilizing both qualitative (focus groups) and quantitative (survey) approaches, the study suggests these individuals use the devices for a variety of purposes: to help them feel safe, for financial benefits, to manage time efficiently, to keep in touch with friends and family members, et al. The degree to which the individuals are dependent on the cell phones and what they view as the negatives of their utilization are also examined. The findings suggest people have various feelings and attitudes toward cell phone usage. This study serves as a foundation on which future studies will be built. 
2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "1880bb9c3229cab3e614ca39079c7781", "text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.", "title": "" }, { "docid": "4d12a4269e4969148f6d5331f5d8afdd", "text": "Money laundering has become of increasing concern to law makers in recent years, principally because of its associations with terrorism. Recent legislative changes in the United Kingdom mean that auditors risk becoming state law enforcement agents in the private sector. We examine this legislation from the perspective of the changing nature of the relationship between auditors and the state, and the surveillant assemblage within which this is located. Auditors are statutorily obliged to file Suspicious Activity Reports (SARs) into an online database, ELMER, but without much guidance regarding how suspicion is determined. Criminal rather than civil or regulatory sanctions apply to auditors’ instances of non-compliance. This paper evaluates the surveillance implications of the legislation for auditors through lenses developed in the accounting and sociological literature by Brivot andGendron, Neu andHeincke, Deleuze and Guattari, and Haggerty and Ericson. It finds that auditors are generating information flows which are subsequently reassembled into discrete and virtual ‘data doubles’ to be captured and utilised by authorised third parties for unknown purposes. The paper proposes that the surveillant assemblage has extended into the space of the auditor-client relationship, but this extension remains inhibited as a result of auditors’ relatively weak level of engagement in providing SARs, thereby pointing to a degree of resistance in professional service firms regarding the deployment of regulation that compromises the foundations of this", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. 
The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "9ce1401e072fc09749d12f9132aa6b1e", "text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.", "title": "" }, { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "6573629e918822c0928e8cf49f20752c", "text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. 
Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.", "title": "" }, { "docid": "1aa51d3ef39773eb3250564ae87c6205", "text": "relatedness between terms using the links found within their corresponding Wikipedia articles. Unlike other techniques based on Wikipedia, WLM is able to provide accurate measures efficiently, using only the links between articles rather than their textual content. Before describing the details, we first outline the other systems to which it can be compared. This is followed by a description of the algorithm, and its evaluation against manually-defined ground truth. The paper concludes with a discussion of the strengths and weaknesses of the new approach. Abstract", "title": "" }, { "docid": "7063d3eb38008bcd344f0ae1508cca61", "text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.", "title": "" }, { "docid": "b66846f076d41c8be3f5921cc085d997", "text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. 
Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.", "title": "" }, { "docid": "59ac2e47ed0824eeba1621673f2dccf5", "text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot", "title": "" }, { "docid": "af5645e4c2b37d229b525ff3bbac505f", "text": "PURPOSE OF REVIEW\nTo analyze the role of prepuce preservation in various disorders and discuss options available to reconstruct the prepuce.\n\n\nRECENT FINDINGS\nThe prepuce can be preserved in selected cases of penile degloving procedures, phimosis or hypospadias repair, and penile cancer resection. There is no clear evidence that debilitating and persistent preputial lymphedema develops after a prepuce-sparing penile degloving procedure. In fact, the prepuce can at times be preserved even if lymphedema develops. The prepuce can potentially be preserved in both phimosis and hypospadias repair. Penile cancer localized to the prepuce can be excised using Mohs' micrographic surgery without compromising survival. Reconstruction of the prepuce still remains a theoretical topic. There has been no study that has systematically evaluated efficacy of any reconstructive procedures.\n\n\nSUMMARY\nThe standard practice for preputial disorders remains circumcision. However, prepuce preservation is often technically feasible without compromising treatment. Preservative surgery combined with reconstruction may lead to better patient satisfaction and quality of life.", "title": "" }, { "docid": "7a67bccffa6222f8129a90933962e285", "text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. 
We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.", "title": "" }, { "docid": "8649d115dea8cb6b3353745476b5c57d", "text": "OBJECTIVES\nTo test a brief, non-sectarian program of meditation training for effects on perceived stress and negative emotion, and to determine effects of practice frequency and test the moderating effects of neuroticism (emotional lability) on treatment outcome.\n\n\nDESIGN AND SETTING\nThe study used a single-group, open-label, pre-test post-test design conducted in the setting of a university medical center.\n\n\nPARTICIPANTS\nHealthy adults (N=200) interested in learning meditation for stress-reduction were enrolled. One hundred thirty-three (76% females) completed at least 1 follow-up visit and were included in data analyses.\n\n\nINTERVENTION\nParticipants learned a simple mantra-based meditation technique in 4, 1-hour small-group meetings, with instructions to practice for 15-20 minutes twice daily. Instruction was based on a psychophysiological model of meditation practice and its expected effects on stress.\n\n\nOUTCOME MEASURES\nBaseline and monthly follow-up measures of Profile of Mood States; Perceived Stress Scale; State-Trait Anxiety Inventory (STAI); and Brief Symptom Inventory (BSI). Practice frequency was indexed by monthly retrospective ratings. Neuroticism was evaluated as a potential moderator of treatment effects.\n\n\nRESULTS\nAll 4 outcome measures improved significantly after instruction, with reductions from baseline that ranged from 14% (STAI) to 36% (BSI). More frequent practice was associated with better outcome. Higher baseline neuroticism scores were associated with greater improvement.\n\n\nCONCLUSIONS\nPreliminary evidence suggests that even brief instruction in a simple meditation technique can improve negative mood and perceived stress in healthy adults, which could yield long-term health benefits. Frequency of practice does affect outcome. Those most likely to experience negative emotions may benefit the most from the intervention.", "title": "" }, { "docid": "d51f0b51f03e310dd183e3a7cb199288", "text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. 
The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.", "title": "" }, { "docid": "215b65a1777fd4076c97770ad339c59f", "text": "Interactive visualization requires the translation of data into a screen space of limited resolution. While currently ignored by most visualization models, this translation entails a loss of information and the introduction of a number of artifacts that can be useful, (e.g., aggregation, structures) or distracting (e.g., over-plotting, clutter) for the analysis. This phenomenon is observed in parallel coordinates, where overlapping lines between adjacent axes form distinct patterns, representing the relation between variables they connect. However, even for a small number of dimensions, the challenge is to effectively convey the relationships for all combinations of dimensions. The size of the dataset and a large number of dimensions only add to the complexity of this problem. To address these issues, we propose Pargnostics, parallel coordinates diagnostics, a model based on screen-space metrics that quantify the different visual structures. Pargnostics metrics are calculated for pairs of axes and take into account the resolution of the display as well as potential axis inversions. Metrics include the number of line crossings, crossing angles, convergence, overplotting, etc. To construct a visualization view, the user can pick from a ranked display showing pairs of coordinate axes and the structures between them, or examine all possible combinations of axes at once in a matrix display. Picking the best axes layout is an NP-complete problem in general, but we provide a way of automatically optimizing the display according to the user's preferences based on our metrics and model.", "title": "" }, { "docid": "b6f026f8b2e37406ee68b9214fb82955", "text": "Human visual behaviour has significant potential for activity recognition and computational behaviour analysis, but previous works focused on supervised methods and recognition of predefined activity classes based on short-term eye movement recordings. We propose a fully unsupervised method to discover users' everyday activities from their long-term visual behaviour. Our method combines a bag-of-words representation of visual behaviour that encodes saccades, fixations, and blinks with a latent Dirichlet allocation (LDA) topic model. We further propose different methods to encode saccades for their use in the topic model. We evaluate our method on a novel long-term gaze dataset that contains full-day recordings of natural visual behaviour of 10 participants (more than 80 hours in total). 
We also provide annotations for eight sample activity classes (outdoor, social interaction, focused work, travel, reading, computer work, watching media, eating) and periods with no specific activity. We show the ability of our method to discover these activities with performance competitive with that of previously published supervised methods.", "title": "" }, { "docid": "c07516bc86b7a082bcc2bd405757d387", "text": "The trend towards more commercial-off-the-shelf (COTS) components in complex safety-critical systems is increasing the difficulty of verifying system correctness. Runtime verification (RV) is a lightweight technique to verify that certain properties hold over execution traces. RV is usually implemented as runtime monitors that can be used as runtime fault detectors or test oracles to analyze a system under test for bad behaviors. Most existing RV methods utilize some form of system or code instrumentation and thus are not designed to monitor potentially black-box COTS components. This thesis presents a suitable runtime monitoring framework for monitoring safety-critical embedded systems with black-box components. We provide an end-to-end framework including proven correct monitoring algorithms, a formal specification language with semi-formal techniques to map the system onto our formal system trace model, specification design patterns to aid translating informal specifications into the formal specification language, and a safety-case pattern example showing the argument that our monitor design can be safely integrated with a target system. We utilized our monitor implementation to check test logs from several system tests. We show the monitor being used to check system test logs offline for interesting properties. We also performed real-time replay of logs from a system network bus, demonstrating the feasibility of our embedded monitor implementation in real-time operation.", "title": "" } ]
scidocsrr
657325690b0c7222e3fd594d52d6521c
Lessons and Insights from Creating a Synthetic Optical Flow Benchmark
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" } ]
[ { "docid": "62c71a412a8b715e2fda64cd8b6a2a66", "text": "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster—a subset of vertices whose internal connections are significantly richer than its external connections—near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and webgraphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the secondsmallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.", "title": "" }, { "docid": "ad58798807256cff2eff9d3befaf290a", "text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. ∗Research supported in part by DFG under grant Br 2158/2-3", "title": "" }, { "docid": "ec0bc85d241f71f5511b54f107987e5a", "text": "We present a new deep learning approach to pose-guided resynthesis of human photographs. At the heart of the new approach is the estimation of the complete body surface texture based on a single photograph. Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body. Rather than working directly with colors of texture elements, the inpainting network estimates an appropriate source location in the input image for each element of the body surface. This correspondence field between the input image and the texture is then further warped into the target image coordinate frame based on the desired pose, effectively establishing the correspondence between the source and the target view even when the pose change is drastic. The final convolutional network then uses the established correspondence and all other available information to synthesize the output image using a fully-convolutional architecture with deformable convolutions. We show stateof-the-art result for pose-guided image synthesis. 
Additionally, we demonstrate the performance of our system for garment transfer and pose-guided face resynthesis.", "title": "" }, { "docid": "dcd21065898c9dd108617a3db8dad6a1", "text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.", "title": "" }, { "docid": "a19d9517866e3f482a35dd0fb26d4405", "text": "Recent rapid advances in ICTs, specifically in Internet and mobile technologies, have highlighted the rising importance of the Business Model (BM) in Information Systems (IS). Despite agreement on its importance to an organization’s success, the concept is still fuzzy and vague, and there is no consensus regarding its definition. Furthermore, understanding the BM domain by identifying its meaning, fundamental pillars, and its relevance to other business concepts is by no means complete. In this paper we aim to provide further clarification by first presenting a classification of definitions found in the IS literature; second, proposing guidelines on which to develop a more comprehensive definition in order to reach consensus; and third, identifying the four main business model concepts and values and their interaction, and thus place the business model within the world of digital business. Based on this discussion, we propose a new definition for the business model that we argue is more appropriate to this new world.", "title": "" }, { "docid": "b9d6744630ed392e5807a56cb2dfaeab", "text": "This document and any map included herein are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. In recent years, the cost of delivering health care in developed and developing countries has been rising exponentially. Governments around the world are searching for alternative mechanisms to reduce costs while increasing the capacity of social programmes with significant investments in infrastructure. A number of jurisdictions have begun to utilise public-private partnerships (PPPs) as a means of achieving these objectives. The use of PPPs in the Canadian health system is a relatively new phenomenon. Generally, the success of PPP projects is evaluated on the basis of the qualitative outcomes of the project, most commonly in a value-for-money analysis. 
In this article, we explore whether quantitative elements are sufficient to measure PPPs in politically sensitive areas of public policy, such as health care. We propose that the best way to evaluate the outcomes of PPPs in public health system projects requires both quantitative and qualitative criteria. We use a framework developed from neo-institutional economics that contextualises outcomes through a balance of quantitative and qualitative assessment criteria. We apply this evaluation framework to a specific Canadian case study in order to determine key success factors for future PPP health infrastructure projects. The analysis concludes that, given the complex and politically sensitive nature of health care, particular attention must be paid to communications and public relations and to design and post-construction planning in order to deliver a successful PPP. 2 PPP relationships differ in a fundamental way from conventional procurement contracting. In conventional procurement, risks are assumed to be contained in a contract focused on a short-term infrastructure deliverable, such as construction of a road, airport, water and sewer facility, or hospital. In PPPs, developing risk-sharing mechanisms is key to enhancing the returns to both the public and private sector. PPPs are based upon a stewardship model in which the private sector takes a more aggressive role in aspects of the project from which it had previously been excluded in the conventional procurement approach, such as design, financing, operations and maintenance. The hypothesis is that when the private sector assumes greater responsibility in the project, there will be incentives to ensure a steady stream of revenue for the private sector over the life of the project. …", "title": "" }, { "docid": "8c78e7c93153284deb46464082e04a69", "text": "This paper presents the design and construction of a microstrip Yagi array antenna operating at 5.3 GHz, to be used with an avalanche sensor in avalanche measurement. The advantage of the antenna is it can achieve a high gain of 15.2 dB with bandwidth of 8% in compact size. The gain enhancement is achieved by using a compact microstrip Yagi antenna as the array element; separating the feed network from the main radiating elements; and increasing the antenna height by installing the feed layer at the back of the patch layer, sharing the same ground plane. In order to ensure the power is transferred smoothly from the main input port to the radiating elements, the corporate feed is also design and tested. The fabricated antenna shows an agreeable performance with the simulated version.", "title": "" }, { "docid": "c5021fd377f1d7ebd8f87fb114ed07d9", "text": "In this essay a new theory of stress and linguistic rhythm will be elaborated, based on the proposals of Liberman (1975).' It will be argued that certain features of prosodic systems like that of English, in particular the phenomenon of \"stress subordination\", are not to be referred primarily to the properties of individual segments (or syllables), but rather reflect a hierarchical rhythmic structuring that organizes the syllables, words, and syntactic phrases of a sentence. The character of this structuring, properly understood, will give fresh insight into phenomena that have been apprehended in terms of the phonological cycle, the stress-subordination convention, the theory of disjunctive ordering, and the use of crucial variables in phonological rules. 
Our theory will employ two basic ideas about the representation of traditional prosodic concepts: first, we represent the notion relative prominence in terms of a relation defined on constituent structure; and second, we represent certain aspects of the notion linguistic rhythm in terms of the alignment of linguistic material with a \"metrical grid\". The perceived \"stressing\" of an utterance, we think, reflects the combined influence of a constituent-structure pattern and its grid alignment. This pattern-grid combination is reminiscent of the traditional picture of verse scansion, so that the theory as a whole deserves the name \"metrical\". We will also use the expression \"metrical theory\" as a convenient term for that portion of the theory which deals with the assignment of relative prominence in terms of a relation defined on constituent structure. Section 1 will apply the metrical theory of stress-pattern assignment to the system of English phrasal stress, arguing this theory's value in rationalizing otherwise arbitrary characteristics of stress features and stress rules. Section 2 will extend this treatment to the domain of English word stress, adopting a somewhat traditional view of the assignment of the feature [+stress], but explaining the generation of word-level", "title": "" }, { "docid": "4ff2e867a47fa27a95e5c190136dd73a", "text": "Lack of trust is one of the most frequently cited reasons for consumers not purchasing from Internet vendors. During the last four years a number of empirical studies have investigated the role of trust in the specific context of e-commerce, focusing on different aspects of this multi-dimensional construct. However, empirical research in this area is beset by conflicting conceptualizations of the trust construct, inadequate understanding of the relationships between trust, its antecedents and consequents, and the frequent use of trust scales that are neither theoretically derived nor rigorously validated. The major objective of this paper is to provide an integrative review of the empirical literature on trust in e-commerce in order to allow cumulative analysis of results. The interpretation and comparison of different empirical studies on on-line trust first requires conceptual clarification. A set of trust constructs is proposed that reflects both institutional phenomena (system trust) and personal and interpersonal forms of trust (dispositional trust, trusting beliefs, trusting intentions and trust-related behaviours), thus facilitating a multi-level and multi-dimensional analysis of research problems related to trust in e-commerce. © 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "18dcf52ce2b8c6bf8fb5c4eb839b6795", "text": "The use of information technology (IT) as a competitive weapon has become a popular cliché; but there is still a marked lack of understanding of the issues that determine the influence of information technology on a particular organization and the processes that will allow a smooth coordination of technology and corporate strategy. This article surveys the major efforts to arrive at a relevant framework and attempts to integrate them in a more comprehensive viewpoint. The focus then turns to the major research issues in understanding the impact of information technology on competitive strategy.
Copyright © 1986 Yannis Bakos and Michael Treacy", "title": "" }, { "docid": "6e3e881cb1bb05101ad0f38e3f21e547", "text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.", "title": "" }, { "docid": "b05d36b98d68c9407e6cb213bcf03709", "text": "With the continuous increase in data velocity and volume nowadays, preserving system and data security is particularly affected. In order to handle the huge amount of data and to discover security incidents in real-time, analyses of log data streams are required. However, most of the log anomaly detection techniques fall short in considering continuous data processing. Thus, this paper aligns an anomaly detection technique for data stream processing. It thereby provides a conceptual basis for future adaption of other techniques and further delivers proof of concept by prototype implementation.", "title": "" }, { "docid": "e59bd7353cdbd4f353e45990a2c24c63", "text": "We describe CACTI-IO, an extension to CACTI [4] that includes power, area and timing models for the IO and PHY of the off-chip memory interface for various server and mobile configurations. CACTI-IO enables design space exploration of the off-chip IO along with the DRAM and cache parameters. We describe the models added and three case studies that use CACTI-IO to study the tradeoffs between memory capacity, bandwidth and power.\n The case studies show that CACTI-IO helps (i) provide IO power numbers that can be fed into a system simulator for accurate power calculations, (ii) optimize off-chip configurations including the bus width, number of ranks, memory data width and off-chip bus frequency, especially for novel buffer-based topologies, and (iii) enable architects to quickly explore new interconnect technologies, including 3-D interconnect. 
We find that buffers on board and 3-D technologies offer an attractive design space involving power, bandwidth and capacity when appropriate interconnect parameters are deployed.", "title": "" }, { "docid": "5293dc28da110096fee7be1da7bf52b2", "text": "The function of brown adipose tissue is to transfer energy from food into heat; physiologically, both the heat produced and the resulting decrease in metabolic efficiency can be of significance. Both the acute activity of the tissue, i.e., the heat production, and the recruitment process in the tissue (that results in a higher thermogenic capacity) are under the control of norepinephrine released from sympathetic nerves. In thermoregulatory thermogenesis, brown adipose tissue is essential for classical nonshivering thermogenesis (this phenomenon does not exist in the absence of functional brown adipose tissue), as well as for the cold acclimation-recruited norepinephrine-induced thermogenesis. Heat production from brown adipose tissue is activated whenever the organism is in need of extra heat, e.g., postnatally, during entry into a febrile state, and during arousal from hibernation, and the rate of thermogenesis is centrally controlled via a pathway initiated in the hypothalamus. Feeding as such also results in activation of brown adipose tissue; a series of diets, apparently all characterized by being low in protein, result in a leptin-dependent recruitment of the tissue; this metaboloregulatory thermogenesis is also under hypothalamic control. When the tissue is active, high amounts of lipids and glucose are combusted in the tissue. The development of brown adipose tissue with its characteristic protein, uncoupling protein-1 (UCP1), was probably determinative for the evolutionary success of mammals, as its thermogenesis enhances neonatal survival and allows for active life even in cold surroundings.", "title": "" }, { "docid": "e10b5a0363897f6e7cbb128a4d2f7cd7", "text": "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator’s objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.", "title": "" }, { "docid": "4f02e48932129dd77f48f99478c08ab2", "text": "A low-power low-voltage OTA with rail-to-rail output is introduced. The proposed topology is based on the common current mirror OTA topology and provide gain enhancement without extra power consumption. Implemented in a standard 0.25/spl mu/m CMOS technology, the proposed OTA achieves 50 dB DC gain in 0.8 V supply voltage. The GBW is 1.2MHz and the static power consumption is 8/spl mu/W while driving 18pF load. The class AB operation increases the slew rate and still maintains low static biasing current. This topology is suitable for low-power low-voltage switched-capacitor application.", "title": "" }, { "docid": "45342a42547f265da8ae9b0e8f8fde1b", "text": "YAGO is a large knowledge base that is built automatically from Wikipedia, WordNet and GeoNames. 
The project combines information from Wikipedias in 10 different languages, thus giving the knowledge a multilingual dimension. It also attaches spatial and temporal information to many facts, and thus allows the user to query the data over space and time. YAGO focuses on extraction quality and achieves a manually evaluated precision of 95%. In this paper, we explain from a general perspective how YAGO is built from its sources, how its quality is evaluated, how a user can access it, and how other projects utilize it.", "title": "" }, { "docid": "adf3678a3f1fcd5db580a417194239f2", "text": "In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a believable road-scene dataset. Utilizing open-source tools and resources found in single-player modding communities, we provide a method for persistent, ground truth, asset annotation of a game world. By collecting a synthetic dataset containing upwards of 1,000,000 images, we demonstrate realtime, on-demand, ground truth data annotation capability of our method. Supplementing this synthetic data to Cityscapes dataset, we show that our data generation method provides qualitative as well as quantitative improvements—for training networks—over previous methods that use video games as surrogate.", "title": "" }, { "docid": "9a8f782acaf09a6a09ceeacfa0fd9fee", "text": "The objective of the current study was to compare the effects of sensory-integration therapy (SIT) and a behavioral intervention on rates of challenging behavior (including self-injurious behavior) in four children diagnosed with Autism Spectrum Disorder. For each of the participants a functional assessment was conducted to identify the variables maintaining challenging behavior. Results of these assessments were used to design function-based behavioral interventions for each participant. Recommendations for the sensory-integration treatment were designed by an Occupational Therapist, trained in the use of sensory-integration theory and techniques. The sensory-integration techniques were not dependent on the results of the functional assessments. The study was conducted within an alternating treatments design, with initial baseline and final best treatment phase. For each participant, results demonstrated that the behavioral intervention was more effective than the sensory integration therapy in the treatment of challenging behavior. In the best treatment phase, the behavioral intervention alone was implemented and further reduction was observed in the rate of challenging behavior.
Analysis of saliva samples revealed relatively low levels of cortisol and very little stress-responsivity across the SIT condition and the behavioral intervention condition, which may be related to the participants' capacity to perceive stress in terms of its social significance.", "title": "" }, { "docid": "936c1c708beea8a40831cf72094636ff", "text": "PURPOSE\nTo evaluate the problems encountered on revising a multiply operated nose and the methods used in correcting such problems.\n\n\nPATIENTS AND METHODS\nThe study included 50 cases presenting for revision rhinoplasty after having had 2 or more previous rhinoplasties. An external rhinoplasty approach was used in all cases. Simultaneous septal surgery was done whenever indicated. All cases were followed for a mean period of 32 months (range, 1.5-8 years). Evaluation of the surgical result depended on clinical examination, comparison of pre- and postoperative photographs, and degree of patients' satisfaction with their aesthetic and functional outcome.\n\n\nRESULTS\nFunctionally, 68% suffered nasal obstruction that was mainly caused by septal deviations and nasal valve problems. Aesthetically, the most common deformities of the upper two thirds of the nose included pollybeak (64%), dorsal irregularities (54%), dorsal saddle (44%), and open roof deformity (42%), whereas the deformities of lower third included depressed tip (68%), tip contour irregularities (60%), and overrotated tip (42%). Nasal grafting was necessary in all cases; usually more than 1 type of graft was used in each case. Postoperatively, 79% of the patients, with preoperative nasal obstruction, reported improved breathing; 84% were satisfied with their aesthetic result; and only 8 cases (16%) requested further revision to correct minor deformities.\n\n\nCONCLUSION\nRevision of a multiply operated nose is a complex and technically demanding task, yet, in a good percentage of cases, aesthetic as well as functional improvement are still possible.", "title": "" } ]
scidocsrr
162ce68b88ea90b547036e7048071c4f
ADAPTIVE PREDICTION TIME FOR SEQUENCE CLASSIFICATION
[ { "docid": "8306c40722bb956253c6e7cf112836d7", "text": "Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.", "title": "" }, { "docid": "75b64f9106b2c334c572bc3180d93aef", "text": "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "title": "" }, { "docid": "2db49e1c2020875f2453d4b614fd2116", "text": "Text Categorization (TC), also known as Text Classification, is the task of automatically classifying a set of text documents into different categories from a predefined set. If a document belongs to exactly one of the categories, it is a single-label classification task; otherwise, it is a multi-label classification task. TC uses several tools from Information Retrieval (IR) and Machine Learning (ML) and has received much attention in the last years from both researchers in the academia and industry developers. In this paper, we first categorize the documents using KNN based machine learning approach and then return the most relevant documents.", "title": "" } ]
[ { "docid": "6533ee7e13ab293f33f1747adff92fe5", "text": "The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its farreaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.", "title": "" }, { "docid": "94013936968a4864167ed4e764398deb", "text": "A prime requirement for autonomous driving is a fast and reliable estimation of the motion state of dynamic objects in the ego-vehicle's surroundings. An instantaneous approach for extended objects based on two Doppler radar sensors has recently been proposed. In this paper, that approach is augmented by prior knowledge of the object's heading angle and rotation center. These properties can be determined reliably by state-of-the-art methods based on sensors such as LIDAR or cameras. The information fusion is performed utilizing an appropriate measurement model, which directly maps the motion state in the Doppler velocity space. This model integrates the geometric properties. It is used to estimate the object's motion state using a linear regression. Additionally, the model allows a straightforward calculation of the corresponding variances. The resulting method shows a promising accuracy increase of up to eight times greater than the original approach.", "title": "" }, { "docid": "5f8b0a15477bf0ee5787269a578988c6", "text": "Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done.", "title": "" }, { "docid": "328a3e05fac7d118a99afd6197dac918", "text": "Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. 
E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.", "title": "" }, { "docid": "f59fd6af9dea570b49c453de02297f4c", "text": "OBJECTIVES\nThe role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0.Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge.Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words.These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms.This paper aims to address the limitations posed by the traditional bag-of-word based methods and propose to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data.Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data.\n\n\nMETHODOLOGY\nSocial media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically.The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise.We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data.The parameter analysis for tuning each classifier is also reported.\n\n\nDATA SETS\nThree data sets are used in this research.The first data set comprises of approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment.The second data set is a random sample of real-world Twitter data in the US.The third data set is a random sample of real-world Facebook Timeline posts.\n\n\nEVALUATIONS\nTwo sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations.The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the stage-of-the-art method.The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media.\n\n\nFINDINGS\nThe small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in performance improvement of 18.61% (F-measure).The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of 
heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-Measure) on average.", "title": "" }, { "docid": "5c26713d33001fc91ce19f551adac492", "text": "Recurrent neural network language models (RNNLMs) have recently become increasingly popular for many applications including speech recognition. In previous research RNNLMs have normally been trained on well-matched in-domain data. The adaptation of RNNLMs remains an open research area to be explored. In this paper, genre and topic based RNNLM adaptation techniques are investigated for a multi-genre broadcast transcription task. A number of techniques including Probabilistic Latent Semantic Analysis, Latent Dirichlet Allocation and Hierarchical Dirichlet Processes are used to extract show level topic information. These were then used as additional input to the RNNLM during training, which can facilitate unsupervised test time adaptation. Experiments using a state-of-the-art LVCSR system trained on 1000 hours of speech and more than 1 billion words of text showed adaptation could yield perplexity reductions of 8% relatively over the baseline RNNLM and small but consistent word error rate reductions.", "title": "" }, { "docid": "9e2dc31edf639e1201c3a3d59f3381af", "text": "The AMBA-AHB Multilayer Bus matrix Self-Motivated Arbitration scheme proposed three methods for data transmitting from master to slave for on-chip communication. Multilayer advanced high-performance bus (ML-AHB) busmatrix employs slave-side arbitration. Slave-side arbitration is different from master-side arbitration in terms of request and grant signals since, in the former, the master merely starts a burst transaction and waits for the slave response to proceed to the next transfer. Therefore, in the former, the unit of arbitration can be a transaction or a transfer. However, the ML-AHB busmatrix of ARM offers only transfer-based fixed-priority and round-robin arbitration schemes. In this paper, we propose the design and implementation of a flexible arbiter for the ML-AHB busmatrix to support three priority policies (fixed priority, round robin, and dynamic priority) and three data multiplexing modes (transfer, transaction, and desired transfer length). In total, there are nine possible arbitration schemes. The proposed arbiter, which is self-motivated (SM), selects one of the nine possible arbitration schemes based upon the priority-level notifications and the desired transfer length from the masters so that arbitration leads to the maximum performance. Experimental results show that, although the area overhead of the proposed SM arbitration scheme is 9%–25% larger than those of the other arbitration schemes, our arbiter improves the throughput by 14%–62% compared to other schemes.", "title": "" }, { "docid": "58f505558cda55abf70b143d52030a2d", "text": "Given a finite set of points P ⊆ R^d, we would like to find a small subset S ⊆ P such that the convex hull of S approximately contains P. More formally, every point in P is within distance ε from the convex hull of S. Such a subset S is called an ε-hull. Computing an ε-hull is an important problem in computational geometry, machine learning, and approximation algorithms. In many applications, the set P is too large to fit in memory. We consider the streaming model where the algorithm receives the points of P sequentially and strives to use a minimal amount of memory.
Existing streaming algorithms for computing an ε-hull require O(ε^((1−d)/2)) space, which is optimal for a worst-case input. However, this ignores the structure of the data. The minimal size of an ε-hull of P, which we denote by OPT, can be much smaller. A natural question is whether a streaming algorithm can compute an ε-hull using only O(OPT) space. We begin with lower bounds that show, under a reasonable streaming model, that it is not possible to have a single-pass streaming algorithm that computes an ε-hull with O(OPT) space. We instead propose three relaxations of the problem for which we can compute ε-hulls using space near-linear to the optimal size. Our first algorithm for points in R^2 that arrive in random order uses O(log n · OPT) space. Our second algorithm for points in R^2 makes O(log(ε^−1)) passes before outputting the ε-hull and requires O(OPT) space. Our third algorithm, for points in R^d for any fixed dimension d, outputs, with high probability, an ε-hull for all but a δ-fraction of directions and requires O(OPT · log OPT) space.", "title": "" }, { "docid": "3259c90b96b3ebbe885f73c2febe863d", "text": "Human-Following robots are being actively researched for their immense potential to carry out mundane tasks like load carrying and monitoring of a target individual through interaction and collaboration. The recent advancements in vision and sensor technologies have helped in creating more user-friendly robots that are able to coexist with humans by leveraging the sensors for human detection, human movement estimation, collision avoidance, and obstacle avoidance. But most of these sensors are suitable only for line-of-sight following of a human. In the case of loss of sight of the target, most of them fail to re-acquire their target.
In this paper, we are proposing a novel method to develop a human following robot using Bluetooth and Inertial Measurement Unit (IMU) on Smartphones which can work under high interference environment and can reacquire the target when lost. The proposed method leverages IMU sensors on the smartphone to estimate the direction of human movement while estimating the distance traveled from the RSSI of the Bluetooth. Thus, the Follow Me robot which estimates the position of target human and direction of heading and effectively track the person was implemented using Smartphone on a differential drive robot.", "title": "" }, { "docid": "ab8599cbe4b906cea6afab663cbe2caf", "text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.", "title": "" }, { "docid": "f24bba45a1905cd4658d52bc7e9ee046", "text": "In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, QualityDiversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.", "title": "" }, { "docid": "5cbd331652b69714bc4ff0eeacc8f85a", "text": "A survey was conducted from May to Oct of 2011 of the parasitoid community of the imported cabbageworm, Pieris rapae (Lepidoptera: Pieridae), in cole crops in part of the eastern United States and southeastern Canada. The findings of our survey indicate that Cotesia rubecula (Hymenoptera: Braconidae) now occurs as far west as North Dakota and has become the dominant parasitoid of P. rapae in the northeastern and north central United States and adjacent parts of southeastern Canada, where it has displaced the previously common parasitoid Cotesia glomerata (Hymenoptera: Braconidae). Cotesia glomerata remains the dominant parasitoid in the mid-Atlantic states, from Virginia to North Carolina and westward to southern Illinois, below latitude N 38° 48’. This pattern suggests that the released populations of C. 
rubecula presently have a lower latitudinal limit south of which they are not adapted.", "title": "" }, { "docid": "1757c61b82376d05a869034b2c3e8455", "text": "DMA-capable interconnects, providing ultra-low latency and high bandwidth, are increasingly being used in the context of distributed storage and data processing systems. However, the deployment of such systems in virtualized data centers is currently inhibited by the lack of a flexible and high-performance virtualization solution for RDMA network interfaces.\n In this work, we present a hybrid virtualization architecture which builds upon the concept of separation of paths for control and data operations available in RDMA. With hybrid virtualization, RDMA control operations are virtualized using hypervisor involvement, while data operations are set up to bypass the hypervisor completely. We describe HyV (Hybrid Virtualization), a virtualization framework for RDMA devices implementing such a hybrid architecture. In the paper, we provide a detailed evaluation of HyV for different RDMA technologies and operations. We further demonstrate the advantages of HyV in the context of a real distributed system by running RAMCloud on a set of HyV-enabled virtual machines deployed across a 6-node RDMA cluster. All of the performance results we obtained illustrate that hybrid virtualization enables bare-metal RDMA performance inside virtual machines while retaining the flexibility typically associated with paravirtualization.", "title": "" }, { "docid": "49f21df66ac901e5f37cff022353ed20", "text": "This paper presents the implementation of the interval type-2 to control the process of production of High-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simply way. The proposal evaluate fuzzy techniques to ensure the accuracy of the model, the most important advantage is that the systems do not need pretreatment of the historical data, it is used as it is. The system is a multiple input single output (MISO) and the main goal of this paper is the proposal of a system that optimizes the resources: computational, time, among others.", "title": "" }, { "docid": "e50320cfddc32a918389fbf8707d599f", "text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.", "title": "" }, { "docid": "01ea69cfc6b81e431717c6b090df37b0", "text": "Physical trauma to the brain has always been known to affect brain functions and subsequent neurobiological development. Research primarily since the early 1990s has shown that psychological trauma can have detrimental effects on brain function that are not only lasting but that may alter patterns of subsequent neurodevelopment, particularly in children although developmental effects may be seen in adults as well. 
Childhood trauma produces a diverse range of symptoms and defining the brain's response to trauma and the factors that mediate the body's stress response systems is at the forefront of scientific investigation. This paper reviews the current evidence relating psychological trauma to anatomical and functional changes in the brain and discusses the need for accurate diagnosis and treatment to minimize such effects and to recognize their existence in developing treatment programs.", "title": "" }, { "docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2", "text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.", "title": "" }, { "docid": "6990c4f7bde94cb0e14245872e670f91", "text": "The UK's recent move to polymer banknotes has seen some of the currently used fingermark enhancement techniques for currency potentially become redundant, due to the surface characteristics of the polymer substrates. Possessing a non-porous surface with some semi-porous properties, alternate processes are required for polymer banknotes. This preliminary investigation explored the recovery of fingermarks from polymer notes via vacuum metal deposition using elemental copper. The study successfully demonstrated that fresh latent fingermarks, from an individual donor, could be clearly developed and imaged in the near infrared. By varying the deposition thickness of the copper, the contrast between the fingermark minutiae and the substrate could be readily optimised. Where the deposition thickness was thin enough to be visually indistinguishable, forensic gelatin lifters could be used to lift the fingermarks. These lifts could then be treated with rubeanic acid to produce a visually distinguishable mark. The technique has shown enough promise that it could be effectively utilised on other semi- and non-porous substrates.", "title": "" }, { "docid": "cd11e079db25441a1a5801c71fcff781", "text": "Quad-robot type (QRT) unmanned aerial vehicles (UAVs) have been developed for quick detection and observation of the circumstances under calamity environment such as indoor fire spots. 
The UAV is equipped with four propellers driven by each electric motor, an embedded controller, an Inertial Navigation System (INS) using three rate gyros and accelerometers, a CCD (Charge Coupled Device) camera with wireless communication transmitter for observation, and an ultrasonic range sensor for height control. Accurate modeling and robust flight control of QRT UAVs are mainly discussed in this work. Rigorous dynamic model of a QRT UAV is obtained both in the reference and body frame coordinate systems. A disturbance observer (DOB) based controller using the derived dynamic models is also proposed for robust hovering control. The control input induced by DOB is helpful to use simple equations of motion satisfying accurately derived dynamics. The developed hovering robot shows stable flying performances under the adoption of DOB and the vision based localization method. Although a model is incorrect, DOB method can design a controller by regarding the inaccurate part of the model and sensor noises as disturbances. The UAV can also avoid obstacles using eight IR (Infrared) and four ultrasonic range sensors. This kind of micro UAV can be widely used in various calamity observation fields without danger of human beings under harmful environment. The experimental results show the performance of the proposed control algorithm.", "title": "" }
]
scidocsrr
6ebaf2722502a9553803a05b66bfa95e
There's No Free Lunch, Even Using Bitcoin: Tracking the Popularity and Profits of Virtual Currency Scams
[ { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" }, { "docid": "8ee24b38d7cf4f63402cd4f2c0beaf79", "text": "At the current stratospheric value of Bitcoin, miners with access to significant computational horsepower are literally printing money. For example, the first operator of a USD $1,500 custom ASIC mining platform claims to have recouped his investment in less than three weeks in early February 2013, and the value of a bitcoin has more than tripled since then. Not surprisingly, cybercriminals have also been drawn to this potentially lucrative endeavor, but instead are leveraging the resources available to them: stolen CPU hours in the form of botnets. We conduct the first comprehensive study of Bitcoin mining malware, and describe the infrastructure and mechanism deployed by several major players. By carefully reconstructing the Bitcoin transaction records, we are able to deduce the amount of money a number of mining botnets have made.", "title": "" } ]
[ { "docid": "091c57447d5a3c97d3ff1afb57ebb4e3", "text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "title": "" }, { "docid": "7a6ae2e12dbd9f4a0a3355caec648ca7", "text": "Near Field Communication (NFC) is an emerging wireless short-range communication technology that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In combination with NFC-capable smartphones it enables intuitive application scenarios for contactless transactions, in particular services for mobile payment and over-theair ticketing. The intention of this paper is to describe basic characteristics and benefits of the underlaying technology, to classify modes of operation and to present various use cases. Both existing NFC applications and possible future scenarios will be analyzed in this context. Furthermore, security concerns, challenges and present conflicts will be discussed eventually.", "title": "" }, { "docid": "2bdfeabf15a4ca096c2fe5ffa95f3b17", "text": "This paper studies how to incorporate the external word correlation knowledge to improve the coherence of topic modeling. Existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines a MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but rather affected by the topic labels of its correlated words. Similar words have better chance to be put into the same topic due to the regularization of MRF, hence the coherence of topics can be boosted. In addition, our model can accommodate the subtlety that whether two words are similar depends on which topic they appear in, which allows word with multiple senses to be put into different topics properly. We derive a variational inference method to infer the posterior probabilities and learn model parameters and present techniques to deal with the hardto-compute partition function in MRF. Experiments on two datasets demonstrate the effectiveness of our model.", "title": "" }, { "docid": "4a9da1575b954990f98e6807deae469e", "text": "Recently, there has been considerable debate concerning key sizes for publ i c key based cry p t o graphic methods. 
Included in the debate have been considerations about equivalent key sizes for different methods and considerations about the minimum required key size for different methods. In this paper we propose a method of analyzing key sizes based upon the value of the data being protected and the cost of breaking keys. I. Introduction. A. Why is key size important? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining ‘infeasible’. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, surreptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper.", "title": "" }, { "docid": "ae6d36ccbf79ae6f62af3a62ef3e3bb2", "text": "This paper presents a new neural network system called the Evolving Tree. This network resembles the Self-Organizing map, but deviates from it in several aspects, which are desirable in many analysis tasks. First of all the Evolving Tree grows automatically, so the user does not have to decide the network’s size before training. Secondly the network has a hierarchical structure, which makes network training and use computationally very efficient. Test results with both synthetic and actual data show that the Evolving Tree works quite well.", "title": "" }, { "docid": "7d5d2f819a5b2561db31645d534836b8", "text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.", "title": "" }, { "docid": "1eba8eccf88ddb44a88bfa4a937f648f", "text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels.
In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.", "title": "" }, { "docid": "0d747bd516498ae314e3197b7e7ad1e3", "text": "Neurotoxins and fillers continue to remain in high demand, comprising a large part of the growing business of cosmetic minimally invasive procedures. Multiple Food and Drug Administration-approved safe yet different products exist within each category, and the role of each product continues to expand. The authors review the literature to provide an overview of the use of neurotoxins and fillers and their future directions.", "title": "" }, { "docid": "2edcf1a54bded9a77345cbe88cc02533", "text": "Although the uncanny exists, the inherent, unavoidable dip (or valley) may be an illusion. Extremely abstract robots can be uncanny if the aesthetic is off, as can cosmetically atypical humans. Thus, the uncanny occupies a continuum ranging from the abstract to the real, although norms of acceptability may narrow as one approaches human likeness. However, if the aesthetic is right, any level of realism or abstraction can be appealing. If so, then avoiding or creating an uncanny effect just depends on the quality of the aesthetic design, regardless of the level of realism. The author’s preliminary experiments on human reaction to near-realistic androids appear to support this hypothesis.", "title": "" }, { "docid": "56998c03c373dfae07460a7b731ef03e", "text": "Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis", "title": "" }, { "docid": "a084e7dd5485e01d97ccf628bc00d644", "text": "A novel concept called gesture-changeable under-actuated (GCUA) function is proposed to improve the dexterities of traditional under-actuated hands and reduce the control difficulties of dexterous hands. Based on the GCUA function, a new humanoid robot hand, GCUA Hand is designed and manufactured. The GCUA Hand can grasp different objects self-adaptively and change its initial gesture dexterously before contacting objects. The hand has 5 fingers and 15 DOFs, each finger is based on screw-nut transmission, flexible drawstring constraint and belt-pulley under-actuated mechanism to realize GCUA function. The analyses on grasping static forces and grasping stabilities are put forward. The analyses and experimental results show that the GCUA function is very nice and valid. The hands with the GCUA function can meet the requirements of grasping and operating with lower control and cost, which is the middle road between traditional under-actuated hands and dexterous hands.", "title": "" }, { "docid": "e7b42688ce3936604aefa581802040a4", "text": "Identity management through biometrics offers potential advantages over knowledge and possession based methods. A wide variety of biometric modalities have been tested so far but several factors paralyse the accuracy of mono modal biometric systems. Usually, the analysis of multiple modalities offers better accuracy. An extensive review of biometric technology is presented here.
Besides the mono modal systems, the article also discusses multi modal biometric systems along with their architecture and information fusion levels. The paper, along with exemplary evidence, highlights the potential of biometric technology, its market value and prospects. Keywords— Biometrics, Fingerprint, Face, Iris, Retina, Behavioral biometrics, Gait, Voice, Soft biometrics, Multi-modal biometrics.", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers’ Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) or develop novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings of MMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: http://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming", "title": "" }, { "docid": "f519d349d928e7006955943043ab0eae", "text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handling, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues.
Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.", "title": "" }, { "docid": "099a2ee305b703a765ff3579f0e0c1c3", "text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.", "title": "" }, { "docid": "0e5a11ef4daeb969702e40ea0c50d7f3", "text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. 
(The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).", "title": "" }, { "docid": "08a6f27e905a732062ae585d8b324200", "text": "The advent of cost-effectiveness and easy-operation depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In the extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.", "title": "" }, { "docid": "957a3970611470b611c024ed3b558115", "text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.", "title": "" }, { "docid": "efe279fbc7307bc6a191ebb397b01823", "text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. 
A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.", "title": "" }, { "docid": "764ebb7673237d152995a0b6ae34e82a", "text": "Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD, the LOD divided by the square root of 2, or zero. These methods for handling below-detection values results in two distributions, a uniform distribution for those values below the LOD, and the true distribution. As a result, this can produce questionable descriptive statistics depending upon the percentage of values below the LOD. An alternative method uses the characteristics of the distribution of the values above the LOD to estimate the values below the LOD. This can be done with an extrapolation technique or maximum likelihood estimation. An example program using the same data is presented calculating the mean, standard deviation, t-test, and relative difference in the means for various methods and compares the results. The extrapolation and maximum likelihood estimate techniques have smaller error rates than all the standard replacement techniques. Although more computational, these methods produce more reliable descriptive statistics.", "title": "" } ]
scidocsrr
b4fdf378ed0e152b0ad8c7e77967f38f
Towards intelligent lower limb wearable robots: Challenges and perspectives - State of the art
[ { "docid": "b2199b7be543f0f287e0cbdb7a477843", "text": "We developed a pneumatically powered orthosis for the human ankle joint. The orthosis consisted of a carbon fiber shell, hinge joint, and two artificial pneumatic muscles. One artificial pneumatic muscle provided plantar flexion torque and the second one provided dorsiflexion torque. Computer software adjusted air pressure in each artificial muscle independently so that artificial muscle force was proportional to rectified low-pass-filtered electromyography (EMG) amplitude (i.e., proportional myoelectric control). Tibialis anterior EMG activated the artificial dorsiflexor and soleus EMG activated the artificial plantar flexor. We collected joint kinematic and artificial muscle force data as one healthy participant walked on a treadmill with the orthosis. Peak plantar flexor torque provided by the orthosis was 70 Nm, and peak dorsiflexor torque provided by the orthosis was 38 Nm. The orthosis could be useful for basic science studies on human locomotion or possibly for gait rehabilitation after neurological injury.", "title": "" }, { "docid": "69b1c87a06b1d83fd00d9764cdadc2e9", "text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental", "title": "" } ]
[ { "docid": "38c78be386aa3827f39825f9e40aa3cc", "text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.", "title": "" }, { "docid": "88077fe7ce2ad4a3c3052a988f9f96c1", "text": "When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.", "title": "" }, { "docid": "80de1fba41f93953ea21a517065f8ca8", "text": "This paper presents the kinematic calibration of a novel 7-degree-of-freedom (DOF) cable-driven robotic arm (CDRA), aimed at improving its absolute positioning accuracy. This CDRA consists of three 'self-calibrated' cable-driven parallel mechanism (CDPM) modules. In order to account for any kinematic errors that might arise when assembling the individual CDPMs, a calibration model is formulated based on the local product-of-exponential formula and the measurement residues in the tool-tip frame poses. An iterative least-squares algorithm is employed to identify the errors in the fixed transformation frames of the sequentially assembled 'self- calibrated' CDPM modules. Both computer simulations and experimental studies were carried out to verify the robustness and effectiveness of the proposed calibration algorithm. From the experimental studies, errors in the fixed kinematic transformation frames were precisely recovered after a minimum of 15 pose measurements.", "title": "" }, { "docid": "8bed049baa03a11867b0205e16402d0e", "text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. 
Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.", "title": "" }, { "docid": "e754c7c7821703ad298d591a3f7a3105", "text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. 
The results show that the proposed system is more scalable and efficient.", "title": "" }, { "docid": "96055f0e41d62dc0ef318772fa6d6d9f", "text": "Building Information Modeling (BIM) has rapidly grown from merely being a three-dimensional (3D) model of a facility to serving as “a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward” [1]. BIM with three primary spatial dimensions (width, height, and depth) becomes 4D BIM when time (construction scheduling information) is added, and 5D BIM when cost information is added to it. Although the sixth dimension of the 6D BIM is often attributed to asset information useful for Facility Management (FM) processes, there is no agreement in the research literature on what each dimension represents beyond the fifth dimension [2]. BIM ultimately seeks to digitize the different stages of a building lifecycle such as planning, design, construction, and operation such that consistent digital information of a building project can be used by stakeholders throughout the building life-cycle [3]. The United States National Building Information Model Standard (NBIMS) initially characterized BIMs as digital representations of physical and functional aspects of a facility. But, in the most recent version released in July 2015, the NBIMS’ definition of BIM includes three separate but linked functions, namely business process, digital representation, and organization and control [4]. A number of national-level initiatives are underway in various countries to formally encourage the adoption of BIM technologies in the Architecture, Engineering, and Construction (AEC) and FM industries. Building SMART, with 18 chapters across the globe, including USA, UK, Australasia, etc., was established in 1995 with the aim of developing and driving the active use of open internationally-recognized standards to support the wider adoption of BIM across the building and infrastructure sectors [5]. The UK BIM Task Group, with experts from industry, government, public sector, institutes, and academia, is committed to facilitate the implementation of ‘collaborative 3D BIM’, a UK Government Construction Strategy initiative [6]. Similarly, the EUBIM Task Group was started with a vision to foster the common use of BIM in public works and produce a handbook containing the common BIM principles, guidance and practices for public contracting entities and policy makers [7].", "title": "" }, { "docid": "13cfc33bd8611b3baaa9be37ea9d627e", "text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.", "title": "" }, { "docid": "f0d3ab8a530d7634149a5c29fa8bfe1b", "text": "In this paper, a novel broadband dual-polarized (slant ±45°) base station antenna element operating at 790–960 MHz is proposed. 
The antenna element consists of two pairs of symmetrical dipoles, four couples of baluns, a cricoid pedestal and two kinds of plastic fasteners. Specific shape metal reflector is also designed to achieve stable radiation pattern and high front-to-back ratio (FBR). All the simulated and measured results show that the proposed antenna element has wide impedance bandwidth (about 19.4%), low voltage standing wave ratio (VSWR < 1.4) and high port to port isolation (S21 < −25 dB) at the whole operating frequency band. Stable horizontal half-power beam width (HPBW) with 65°±4.83° and high gain (> 9.66 dBi) are also achieved. The proposed antenna element fabricated by integrated metal casting technology has great mechanical properties such as compact structure, low profile, good stability, light weight and easy to fabricate. Due to its good electrical and mechanical characteristics, the antenna element is suitable for European Digital Dividend, CDMA800 and GSM900 bands in base station antenna of modern mobile communication.", "title": "" }, { "docid": "60d6869cadebea71ef549bb2a7d7e5c3", "text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.", "title": "" }, { "docid": "d9123053892ce671665a3a4a1694a57c", "text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. 
In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.", "title": "" }, { "docid": "7677b67bd95f05c2e4c87022c3caa938", "text": "The semi-supervised learning usually only predict labels for unlabeled data appearing in training data, and cannot effectively predict labels for testing data never appearing in training set. To handle this outof-sample problem, many inductive methods make a constraint such that the predicted label matrix should be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose to use an elastic embedding constraint on the predicted label matrix such that the manifold structure can be better explored. To solve our new objective and also a more general optimization problem, we study a novel adaptive loss with efficient optimization algorithm. Our new adaptive loss minimization method takes the advantages of both L1 norm and L2 norm, and is robust to the data outlier under Laplacian distribution and can efficiently learn the normal data under Gaussian distribution. Experiments have been performed on image classification tasks and our approach outperforms other state-of-the-art methods.", "title": "" }, { "docid": "3646b64ac400c12f9c9c4f8ba4f53591", "text": "Cerebral organoids recapitulate human brain development at a considerable level of detail, even in the absence of externally added signaling factors. The patterning events driving this self-organization are currently unknown. Here, we examine the developmental and differentiative capacity of cerebral organoids. Focusing on forebrain regions, we demonstrate the presence of a variety of discrete ventral and dorsal regions. Clearing and subsequent 3D reconstruction of entire organoids reveal that many of these regions are interconnected, suggesting that the entire range of dorso-ventral identities can be generated within continuous neuroepithelia. Consistent with this, we demonstrate the presence of forebrain organizing centers that express secreted growth factors, which may be involved in dorso-ventral patterning within organoids. Furthermore, we demonstrate the timed generation of neurons with mature morphologies, as well as the subsequent generation of astrocytes and oligodendrocytes. Our work provides the methodology and quality criteria for phenotypic analysis of brain organoids and shows that the spatial and temporal patterning events governing human brain development can be recapitulated in vitro.", "title": "" }, { "docid": "4db29a3fd1f1101c3949d3270b15ef07", "text": "Human goal-directed action emerges from the interaction between stimulus-driven sensorimotor online systems and slower-working control systems that relate highly processed perceptual information to the construction of goal-related action plans. This distribution of labor requires the acquisition of enduring action representations; that is, of memory traces which capture the main characteristics of successful actions and their consequences. It is argued here that these traces provide the building blocks for off-line prospective action planning, which renders the search through stored action representations an essential part of action control. 
Hence, action planning requires cognitive search (through possible options) and might have led to the evolution of cognitive search routines that humans have learned to employ for other purposes as well, such as searching for perceptual events and through memory. Thus, what is commonly considered to represent different types of search operations may all have evolved from action planning and share the same characteristics. Evidence is discussed which suggests that all types of cognitive search—be it in searching for perceptual events, for suitable actions, or through memory—share the characteristic of following a fi xed sequence of cognitive operations: divergent search followed by convergent search.", "title": "" }, { "docid": "7c295cb178e58298b1f60f5a829118fd", "text": "A dual-band 0.92/2.45 GHz circularly-polarized (CP) unidirectional antenna using the wideband dual-feed network, two orthogonally positioned asymmetric H-shape slots, and two stacked concentric annular-ring patches is proposed for RF identification (RFID) applications. The measurement result shows that the antenna achieves the impedance bandwidths of 15.4% and 41.9%, the 3-dB axial-ratio (AR) bandwidths of 4.3% and 21.5%, and peak gains of 7.2 dBic and 8.2 dBic at 0.92 and 2.45 GHz bands, respectively. Moreover, the antenna provides stable symmetrical radiation patterns and wide-angle 3-dB AR beamwidths in both lower and higher bands for unidirectional wide-coverage RFID reader applications. Above all, the dual-band CP unidirectional patch antenna presented is beneficial to dual-band RFID system on configuration, implementation, as well as cost reduction.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "c695f74a41412606e31c771ec9d2b6d3", "text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. 
The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.", "title": "" }, { "docid": "678ef706d4cb1c35f6b3d82bf25a4aa7", "text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.", "title": "" }, { "docid": "db190bb0cf83071b6e19c43201f92610", "text": "In this paper, a MATLAB based simulation of Grid connected PV system is presented. The main components of this simulation are PV solar panel, Boost converter; Maximum Power Point Tracking System (MPPT) and Grid Connected PV inverter with closed loop control system is designed and simulated. A simulation studies is carried out in different solar radiation level.", "title": "" }, { "docid": "ac156d7b3069ff62264bd704b7b8dfc9", "text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO", "title": "" }, { "docid": "5008ecf234a3449f524491de04b7868c", "text": "Cross-domain recommendations are currently available in closed, proprietary social networking ecosystems such as Facebook, Twitter and Google+. I propose an open framework as an alternative, which enables cross-domain recommendations with domain-agnostic user profiles modeled as semantic interest graphs. This novel framework covers all parts of a recommender system. It includes an architecture for privacy-enabled profile exchange, a distributed and domain-agnostic user model and a cross-domain recommendation algorithm. 
This enables users to receive recommendations for a target domain (e.g. food) based on any kind of previous interests.", "title": "" } ]
scidocsrr
c07287090c74ba660018576f21d102d7
How competitive are you: Analysis of people's attractiveness in an online dating system
[ { "docid": "9efa0ff0743edacc4e9421ed45441fde", "text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.", "title": "" }, { "docid": "4f8fea97733000d58f2ff229c85aeaa0", "text": "Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites.", "title": "" } ]
[ { "docid": "3fbb2bb37f44cb8f300fd28cdbd8bc06", "text": "The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (Some figures may appear in colour only in the online journal)", "title": "" }, { "docid": "3567af18bc17efdb0efeb41d08fabb7b", "text": "In this review we examine recent research in the area of motivation in mathematics education and discuss findings from research perspectives in this domain. We note consistencies across research perspectives that suggest a set of generalizable conclusions about the contextual factors, cognitive processes, and benefits of interventions that affect students’ and teachers’ motivational attitudes. Criticisms are leveled concerning the lack of theoretical guidance driving the conduct and interpretation of the majority of studies in the field. Few researchers have attempted to extend current theories of motivation in ways that are consistent with the current research on learning and classroom discourse. In particular, researchers interested in studying motivation in the content domain of school mathematics need to examine the relationship that exists between mathematics as a socially constructed field and students’ desire to achieve.", "title": "" }, { "docid": "6e82e635682cf87a84463f01c01a1d33", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. 
Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "6e60d6b878c35051ab939a03bdd09574", "text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.", "title": "" }, { "docid": "049def2d879d0b873132660b0b856443", "text": "This report explores the relationship between narcissism and unethical conduct in an organization by answering two questions: (1) In what ways does narcissism affect an organization?, and (2) What is the relationship between narcissism and the financial industry? Research suggests the overall conclusion that narcissistic individuals directly influence the identity of an organization and how it behaves. Ways to address these issues are shown using Enron as a case study example.", "title": "" }, { "docid": "d835cb852c482c2b7e14f9af4a5a1141", "text": "This paper investigates the effectiveness of state-of-the-art classification algorithms to categorise road vehicles for an urban traffic monitoring system using a multi-shape descriptor. The analysis is applied to monocular video acquired from a static pole-mounted road side CCTV camera on a busy street. Manual vehicle segmentation was used to acquire a large (>2000 sample) database of labelled vehicles from which a set of measurement-based features (MBF) in combination with a pyramid of HOG (histogram of orientation gradients, both edge and intensity based) features. These are used to classify the objects into four main vehicle categories: car, van, bus and motorcycle. Results are presented for a number of experiments that were conducted to compare support vector machines (SVM) and random forests (RF) classifiers. 10-fold cross validation has been used to evaluate the performance of the classification methods. The results demonstrate that all methods achieve a recognition rate above 95% on the dataset, with SVM consistently outperforming RF. A combination of MBF and IPHOG features gave the best performance of 99.78%.", "title": "" }, { "docid": "9f530b42ae19ddcf52efa41272b2dbc7", "text": "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learningby-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, the appearance variability as well as the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. 
Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses realtime approximations for complex eyeball materials and structures as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.", "title": "" }, { "docid": "759a19f60890a11e7e460aecd7bb6477", "text": "The stiff man syndrome (SMS) and its variants, focal SMS, stiff limb (or leg) syndrome (SLS), jerking SMS, and progressive encephalomyelitis with rigidity and myoclonus (PERM), appear to occur more frequently than hitherto thought. A characteristic ensemble of symptoms and signs allows a tentative clinical diagnosis. Supportive ancillary findings include (1) the demonstration of continuous muscle activity in trunk and proximal limb muscles despite attempted relaxation, (2) enhanced exteroceptive reflexes, and (3) antibodies to glutamic acid decarboxylase (GAD) in both serum and spinal fluid. Antibodies to GAD are not diagnostic or specific for SMS and the role of these autoantibodies in the pathogenesis of SMS/SLS/PERM is the subject of debate and difficult to reconcile on the basis of our present knowledge. Nevertheless, evidence is emerging to suggest that SMS/SLS/PERM are manifestations of an immune-mediated chronic encephalomyelitis and immunomodulation is an effective therapeutic approach.", "title": "" }, { "docid": "5ca5cfcd0ed34d9b0033977e9cde2c74", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of off-patent products. First, we construct a vertical differentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several off-patent molecules before and after the policy reform. Off-patent drugs not subject to RP serve as our control group. We find that RP significantly reduces both brand-name and generic prices, and results in significantly lower brand-name market shares. Finally, we show that RP has a strong negative effect on average molecule prices, suggesting significant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classifications: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for financial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: kurt.brekke@nhh.no. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: tor.holmas@uni.no.
Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: o.r.straume@eeg.uminho.pt.", "title": "" }, { "docid": "00c17123df0fa10f0d405b4d0c9dfad0", "text": "Touchless hand gesture recognition systems are becoming important in automotive user interfaces as they improve safety and comfort. Various computer vision algorithms have employed color and depth cameras for hand gesture recognition, but robust classification of gestures from different subjects performed under widely varying lighting conditions is still challenging. We propose an algorithm for drivers’ hand gesture recognition from challenging depth and intensity data using 3D convolutional neural networks. Our solution combines information from multiple spatial scales for the final prediction. It also employs spatiotemporal data augmentation for more effective training and to reduce potential overfitting. Our method achieves a correct classification rate of 77.5% on the VIVA challenge dataset.", "title": "" }, { "docid": "8f7428569e1d3036cdf4842d48b56c22", "text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.", "title": "" }, { "docid": "895f0424cb71c79b86ecbd11a4f2eb8e", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. 
These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case.", "title": "" }, { "docid": "d488d9d754c360efb3910c83e3175756", "text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.", "title": "" }, { "docid": "3f2d4df1b0ef315ee910636c9439b049", "text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources.
In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transimssion fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surface by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. 
This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.", "title": "" }, { "docid": "4689161101a990d17b08e27b3ccf2be3", "text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. 
It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process.", "title": "" }, { "docid": "934ee0b55bf90eed86fabfff8f1238d1", "text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.", "title": "" }, { "docid": "c6ebb1f54f42f38dae8c19566f2459ce", "text": "We develop several predictive models linking legislative sentiment to legislative text. Our models, which draw on ideas from ideal point estimation and topic models, predict voting patterns based on the contents of bills and infer the political leanings of legislators. With supervised topics, we provide an exploratory window into how the language of the law is correlated with political support. We also derive approximate posterior inference algorithms based on variational methods. Across 12 years of legislative data, we predict specific voting patterns with high accuracy.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. 
The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c", "text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.", "title": "" }, { "docid": "2b98fd7a61fd7c521758651191df74d0", "text": "Nowadays, a great effort is done to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.", "title": "" } ]
scidocsrr
ee3564d114c93f663c0467dfb0d06181
Gambling is bad for you. Gamblers may win money from time to time, but in the long run, the House always wins. Why should governments allow an activity that helps their citizens lose the money they have worked so hard to earn? The harm is not just the loss of money and possible bankruptcy; it causes depression, insomnia, and other stress-related disorders [4]. The internet has made gambling so much easier to do and encouraged lots of new people to place bets, dramatically multiplying the harm.
[ { "docid": "5ce1d9b6ed0d3b41e470e2807c037972", "text": "economic policy law crime policing digital freedoms freedom expression Every leisure industry attracts a few troubled individuals who take the activity to harmful extremes. For every thousand drinkers there are a few alcoholics. Similarly some sports fans are hooligans. Those who gamble enough to harm themselves would be those who would gamble in casinos if the internet option was not available.\n", "title": "" } ]
[ { "docid": "de909a7b7e21de332a4bbce9a6430cfa", "text": "economic policy law crime policing digital freedoms freedom expression There is no evidence that gambling prevents people from caring for their family. The vast majority who gamble do so responsibly. It isn’t right to ban something that millions of people enjoy just because a few cause problems. And banning gambling, whether online or in the real world will not stop these problems. Sadly, even if it is illegal, people with problems will still find a way to hurt those around them – just look at drugs.\n", "title": "" }, { "docid": "2e08f5bb359b2c9caf5ce492a01912f0", "text": "economic policy law crime policing digital freedoms freedom expression Criminals will always try to exploit any system, but if governments allow legal online gambling they can regulate it. It is in the interest of gambling companies to build trustworthy brands and cooperate with the authorities on stopping any crime. Cheats in several sports have been caught because legal websites reported strange betting patterns. Betfair for example provides the authorities with an early warning system ‘BetMon’ to watch betting patterns.\n", "title": "" }, { "docid": "154ad68e18b3c20384a606614b4ee484", "text": "economic policy law crime policing digital freedoms freedom expression Unlike drugs, gambling is not physically or metabolically addictive. Most gamblers are not addicts, simply ordinary people who enjoy the excitement of a bet on a sporting event or card game. The large majority of people who gamble online keep to clear limits and stop when they reach them. The few people with a problem with being addicted will still find ways to gamble if gambling is illegal either through a casino, or else still online but in a black market that offers no help and that may use criminal violence to enforce payment.\n", "title": "" }, { "docid": "e51474dedeecb206ba3e9c94942ea744", "text": "economic policy law crime policing digital freedoms freedom expression People are not free to do whatever they want whenever they want. When their activities harm society it is the government’s role to step in to prevent that harm. Online gambling simply provides the freedom for more people to get into debt, not a freedom that should be encouraged.\n", "title": "" }, { "docid": "4c6d1733c619690dbf76333b473b9f45", "text": "economic policy law crime policing digital freedoms freedom expression Gambling is quite different from buying stocks and shares. With the stock market investors are buying a stake in an actual company. This share may rise or fall in value, but so can a house or artwork. In each case there is a real asset that is likely to hold its value in the long term, which isn’t the case with gambling. Company shares and bonds can even produce a regular income through dividend and interest payments. It is true that some forms of financial speculation are more like gambling – for example the derivatives market or short-selling, where the investor does not actually own the asset being traded. But these are not types of investment that ordinary people have much to do with. They are also the kinds of financial activity most to blame for the financial crisis, which suggests we need more government control, not less.\n", "title": "" }, { "docid": "72f61b1a779f57be5a7ea0e8aa7707e5", "text": "economic policy law crime policing digital freedoms freedom expression It is only in the interests of big gambling sites that aim to create a long term business to go along with tough regulation. 
Online gambling sites can get around government regulations that limit the dangers of betting. Because they can be legally sited anywhere in the world, they can pick countries with no rules to protect customers. In the real world governments can ban bets being taken from children and drunks. They can make sure that the odds are not changed to suit the House. And they can check that people running betting operations don’t have criminal records. In online gambling on the other hand 50% of players believe that internet casino’s cheat [14].\n", "title": "" }, { "docid": "81f981d884a7ebc9c66aa0dd772a5c05", "text": "economic policy law crime policing digital freedoms freedom expression Governments have the power to ban online gambling in their own country. Even if citizens could use foreign websites, most will not choose to break the law. When the United States introduced its Unlawful Internet Gambling Enforcement Act in 2006 gambling among those of college-age fell from 5.8% to 1.5% [12]. Blocking the leading websites will also be effective, as it makes it very hard for them to build a trusted brand. And governments can stop their banks handling payments to foreign gambling companies, cutting off their business.\n", "title": "" }, { "docid": "46ba8fc99d8acbdf158083b449f6ec85", "text": "economic policy law crime policing digital freedoms freedom expression Because people will gamble anyway, the best that governments can do is make sure that their people gamble in safe circumstances. This means real world that casinos and other betting places that can easily be monitored.\n\nThe examples of government using gambling for their own purposes are really the government turning gambling into a benefit for the country. Physical casinos benefit the economy and encourage investment, and lotteries can be used to raise money for good causes. Online gambling undermines all this, as it can be sited anywhere in the world but can still compete with, and undercut organised national betting operations.\n", "title": "" }, { "docid": "d75df3012dd41644ccfcc97c5b9b7a79", "text": "economic policy law crime policing digital freedoms freedom expression Online gambling affects families\n\nA parent who gambles can quickly lose the money their family depends on for food and rent. It is a common cause of family break-up and homelessness, so governments should get involved to protect innocent children from getting hurt [5]. Each problem gambler harmfully impacts 10-15 other people [6]. The internet makes it easy for gamblers to bet secretly, without even leaving the house, so people become addicted to gambling without their families realising what is going on until too late.\n", "title": "" }, { "docid": "eaab866fdf1b9283debf296a7cdf07be", "text": "economic policy law crime policing digital freedoms freedom expression Gambling is addictive.\n\nHumans get a buzz from taking a risk and the hope that this time their luck will be in, this is similar to drug addicts [7]. The more people bet, the more they want to bet, so they become hooked on gambling which can wreck their lives. Internet gambling is worse because it is not a social activity. Unlike a casino or race track, you don’t have to go anywhere to do it, which can put a brake on the activity. The websites never shut. There won’t be people around you to talk you out of risky bets. 
There is nothing to stop you gambling your savings away while drunk.\n", "title": "" }, { "docid": "14be38e43a7e16f44a2871a450dccbe5", "text": "economic policy law crime policing digital freedoms freedom expression Online gambling encourages crime\n\nHuman trafficking, forced prostitution and drugs provide $2.1 billion a year for the Mafia but they need some way through which to put this money into circulation. Online gambling is that way in. They put dirty money in and win clean money back [8]. Because it is so international and outside normal laws, it makes criminal cash hard to track. There is a whole array of other crime associated with online gambling; hacking, phishing, extortion, and identity fraud, all of which can occur on a large scale unconstrained by physical proximity [9]. Online gambling also encourages corruption in sport. By allowing huge sums of money to be bet internationally on the outcome of a game or race, it draws in criminals who can try to bribe or threaten sportsmen.\n", "title": "" }, { "docid": "f1a2f9aaec6eb4fa051fe97e1a9952e2", "text": "economic policy law crime policing digital freedoms freedom expression Government only objects to online gambling because they dont benefit\n\nGovernments are hypocritical about gambling. They say they don’t like it but they often use it for their own purposes. Sometimes they only allow gambling in certain places in order to boost a local economy. Sometimes they profit themselves by running the only legal gambling business, such as a National Lottery [15] or public racecourse betting. This is bad for the public who want to gamble. Online gambling firms can break through government control by offering better odds and attractive new games.\n", "title": "" }, { "docid": "bcf30ccecd8726747480c24d543ef251", "text": "economic policy law crime policing digital freedoms freedom expression Cant enforce an online gambling ban\n\nGovernments can’t actually do anything to enforce a ban on the world wide web. Domestic laws can only stop internet companies using servers and offices in their own country. They cannot stop their citizens going online to gamble using sites based elsewhere. Governments can try to block sites they disapprove of, but new ones will keep springing up and their citizens will find ways around the ban. So practically there is little the government can do to stop people gambling online. Despite it being illegal the American Gambling Association has found that 4% of Americans already engage in online gambling [11].\n", "title": "" }, { "docid": "4da6f98c448e1b1d7fc1482abcb0da32", "text": "economic policy law crime policing digital freedoms freedom expression Other forms of online gambling\n\nWhat is the difference between gambling and playing the stock market? In each case people are putting money at risk in the hope of a particular outcome. Gambling on horse-racing or games involves knowledge and expertise that can improve your chances of success. In the same way, trading in bonds, shares, currency or derivatives is a bet that your understanding of the economy is better than that of other investors. Why should one kind of online risk-taking be legal and the other not?\n", "title": "" }, { "docid": "ed53c9c164b2ca80fedcc4f767bbf27a", "text": "economic policy law crime policing digital freedoms freedom expression Personal freedom\n\nGambling is a leisure activity enjoyed by many millions of people. Governments should not tell people what they can do with their own money. 
Those who don’t like gambling should be free to buy adverts warning people against it, but they should not be able to use the law to impose their own beliefs. Online gambling has got rid of the rules that in the past made it hard to gamble for pleasure and allowed many more ordinary people to enjoy a bet from time to time. It provides the freedom to gamble, whenever and wherever and with whatever method the individual prefers.\n", "title": "" }, { "docid": "f026abdff01f2a90b1308cbeeb08af16", "text": "economic policy law crime policing digital freedoms freedom expression Only regulation can mitigate harms\n\nIt is where the sites operate, not where they are set up that matters for regulation. It is in gambling sites interest to run a trustworthy, responsible business. Whatever they are looking for online, internet users choose trusted brands that have been around for a while. If a gambling site acts badly, for example by changing its odds unfairly, word will soon get around and no one will want to use it. Regulation will mean that sites will have to verify the age of their users and prevent problem gamblers from accessing their site. When there is regulation consumers will go to the sites that are verified by their government and are providing a legal, safe service [13].\n", "title": "" } ]
arguana
c1905847956ecea36f9af1fe5f1d3179
The internet as a threat to public safety. The internet can be used as a tool to create an imminent threat to the public. If public officials had information that a massive protest is being organized, which could spiral into violence and endanger the safety of the public, it would be irresponsible for the government not to try to prevent such a protest. Governments are entrusted with protecting public safety and security, and not preventing such a threat would constitute a failure in the performance of their duties [1] . An example of this happening was the use first of Facebook and Twitter and then of BlackBerry Messenger to organise and share information on the riots in London in the summer of 2011. [2] [1] Wyatt, Edward, 2012. “FCC Asks for Guidance on Whether, and When to Cut Off Cellphone Service.” New York Times, 2 March 2012. [2] Halliday, Josh, 2011. “London riots: how BlackBerry Messenger played a key role”. Guardian.co.uk, 8 August 2011.
[ { "docid": "5f1aef8d29eafd3f70f7c92067f6339b", "text": "government terrorism digital freedoms access information should Other means can be employed to ensure the safety of the population without disrupting access to the internet, like deploying security forces to make sure protests don’t get out of hand or turn violent. In fact, being able to monitor online activity through social media like Facebook and Twitter might actually aid, rather than hinder law enforcement in ensuring the safety of the public. London’s police force, the Metropolitan Police, in the wake of the riots has are using software to monitor social media to predict where social disorder may take place. [1]\n\n[1] Adams, Lucy, 2012. “Police develop technology to monitor social neworks”. Heraldscotland, 6 August 2012.\n", "title": "" } ]
[ { "docid": "ceef1f7e5b30d1ba0b21509db0e696da", "text": "government terrorism digital freedoms access information should Historical precedent does not apply to the internet. It is very different to media reporting during times of unrest; the internet is not just a means of disseminating information but also for many people their main form of communication; the U.S. government has never tried to ban people from using telephones. There are severe downsides to the censorship of information during times of war or civil unrest, the most notable one being that it is used to hide the real cost and consequences of war from the population which is expected to support it. Conversely, in a world where every mobile phone is now connected to a global network, people all around the world can have access to an unparalleled amount of information from the field. Curtailing such internet access is to their detriment.\n", "title": "" }, { "docid": "432d37713306c981c63f858686094fc4", "text": "government terrorism digital freedoms access information should In July 2012, The United Nations Human Rights Council endorsed a resolution upholding the principle of freedom of expression and information on the internet. In a special report, it also “called upon all states to ensure that Internet access is maintained at all times, including during times of political unrest” [1] . While access to the internet has not yet had time to establish itself legally as a human right, there are compelling reasons to change its legal status, and the UN is leading the charge. Even before internet access is recognized as a human right the idea that national security should take precedence over ‘lesser rights’ is wrong; states should not survive at the expense of the rights of their citizens. States exist to protect their citizens not harm them.\n\n[1] Kravets, David, 2011. “UN Report Declares Internet Access a Human Right”. Wired.com, 6 November 2011.\n", "title": "" }, { "docid": "ea123c1aaad9989c7b7cfaf3f5f308b7", "text": "government terrorism digital freedoms access information should Freedom of expression, assembly, and information are important rights, but restrictions can be placed on all of them if a greater good, like public safety, is at stake. For example, one cannot use her freedom of expression to incite violence towards others and many countries regard hate speech as a crime. [1] Therefore, if the internet is being used for such abuses of ones rights, the disruption of service, even to a large number of people, can be entirely warranted.\n\n[1] Waldron, Jeremy, The Harm in Hate Speech, Harvard University Press, 8 June 2012, p.8.\n", "title": "" }, { "docid": "55f34c7e064bd48c7274695b7a81afb4", "text": "government terrorism digital freedoms access information should Being able to witness atrocities from the field in real time does not change the international community’s capacity or political willingness to intervene in such situations. If anything, it has had the unfortunate side effect of desensitizing international public opinion to the horrors of war and conflicts, like the one in Syria where there have been thousands of videos showing the actions of the Syrian government but this has not resulted in action from the international community. [1] The onslaught of gruesome, graphic imagery has made people more used to witnessing such scenes from afar and less likely to be outraged and to ask their governments to intervene.\n\n[1] Harding, Luke, 2012. “Syria’s video activists give revolution the upper hand in media war”. 
Guardian.co.uk, 1 August 2012.\n", "title": "" }, { "docid": "f4dd344282c44b8d35ea262291f484c4", "text": "government terrorism digital freedoms access information should Democratic change can come about in a variety of ways. Violent public protests are only one such way, and probably the least desirable one. And now, with access to social media nearly universally available, such protests can be organized faster, on a larger, more dangerous scale than ever before. It encourages opposition movements and leaders in such countries to turn away from incremental, but peaceful changes through political negotiations, and to appeal to mass protests instead, thus endangering the life or their supporters and that of the general public. Governments that respond to violence by cutting off access are not responding with repression but simply trying to reduce the violence. Cutting internet access is a peaceful means of preventing organized violence that potentially saves lives by preventing confrontation between violent groups and riot police.\n", "title": "" }, { "docid": "b174a22c6e88b863f97d61570a80dd8c", "text": "government terrorism digital freedoms access information should Historical precedent.\n\nHistorically, governments have always controlled the access to information and placed restriction on media during times of war. This is an entirely reasonable policy and is done for a number of reasons: to sustain morale and prevent predominantly negative stories from the battlefield reaching the general public, and to intercept propaganda from the enemy, which might endanger the war effort [1] . For example, both Bush administrations imposed media blackouts during wartime over the return of the bodies of dead American soldiers at Dover airport [2] . The internet is simply a new medium of transmitting information, and the same principles can be applied to its regulation, especially when the threat to national security is imminent, like in the case of disseminating information for the organization of a violent protest.\n\n[1] Payne, Kenneth. 2005. “The Media as an Instrument of War”. Parameters, Spring 2005, pp. 81-93.\n\n[2] BBC, 2009. “US War Dead Media Blackout Lifted”.\n", "title": "" }, { "docid": "d94f0651ec750205a84309e1ff377d1b", "text": "government terrorism digital freedoms access information should National security takes precedence.\n\nInternet access is not a fundamental right as recognized by any major human rights convention, if it can be called a right at all. [1] Even if we accept that people should have a right to internet access, in times of war or civil unrest the government should be able to abridge lesser rights for the sake of something that is critical to the survival of the state, like national security. After all, in a war zone few rights survive or can be upheld at all. Preventing such an outcome at the expense of the temporary curtailment of some lesser rights is entirely justified. 
Under current law, in most states, only the most fundamental of rights, like the right to life, prohibition against torture, slavery, and the right to a fair trial are regarded as inalienable [2] .\n\n[1] For more see the debatabase debate on internet access as a human right.\n\n[2] Article 15 of the European Convention on Human rights: “In time of war or other public emergency threatening the life of the nation any High Contracting Party may take measures derogating from its obligations under this Convention to the extent strictly required by the exigencies of the situation, provided that such measures are not inconsistent with its other obligations under international law.” http://www.hri.org/docs/ECHR50.html\n", "title": "" }, { "docid": "cf47f900746702d040833d9df8416bee", "text": "government terrorism digital freedoms access information should Disrupting internet service is a form of repression.\n\nThe organization of public protests is an invaluable right for citizens living under the rule of oppressive regimes. Like in the case of the Arab Spring, internet access gives them the tools to mobilize, make their message heard, and demand greater freedoms. In such cases, under the guise of concern for public safety, these governments disrupt internet service in an attempt to stamp out legitimate democratic protests and stamp out the dissatisfied voices of their citizens [1] They are concerned not for the safety of the public, but to preserve their own grasp on power. A good example of this are the actions of the government of Myanmar when in 2007 in response to large scale protests the government cut internet access to the whole country in order to prevent reports of the government’s crackdown getting out. [2] Establishing internet access as a fundamental right at international level would make it clear to such governments that they cannot simply cut access as a tactic to prevent legitimate protests against them.\n\n[1] The Telegraph. “Egypt. Internet Service Disrupted Before Large Rally”. 28 January 2011.\n\n[2] Tran, Mark, 2007. “Internet access cut off in Burma”. Guardian.co.uk, 28 September 2007.\n", "title": "" }, { "docid": "2c322b6919bed304eaa50dba196afc8f", "text": "government terrorism digital freedoms access information should The right to internet access as a fundamental right.\n\nInternet access is a “facilitative right”, in that it facilitates access to the exercise of many other rights: like freedom of expression, information, and assembly. It is a “gateway right”. Possessing a right is only as valuable as your capacity to exercise it. A government cannot claim to protect freedom of speech or expression, and freedom of information, if it is taking away from its citizens the tools to access them. And that is exactly what the disruption of internet service does. Internet access needs to be a protected right so that all other rights which flow from it. [1]\n\nThe Internet is a tool of communication so it is important not just to individuals but also to communities. The internet becomes an outlet that can help to preserve groups’ culture or language [2] and so as an enabler of this groups’ culture access to the internet may also be seen as a group right – one which would be being infringed when the state cuts off access to large numbers of individuals.\n\n[1] BBC, 2010. “Internet Access is ‘a Fundamental Right’\".\n\n[2] Jones, Peter, 2008. \"Group Rights\", The Stanford Encyclopedia of Philosophy (Winter 2008 Edition), Edward N. 
Zalta (ed.).\n", "title": "" }, { "docid": "f2c5216ff441d8762a97ccff560d6a0c", "text": "government terrorism digital freedoms access information should The prevention of atrocities during war and unrest.\n\nIn the past, horrific crimes could be committed in war zones without anyone ever knowing about it, or with news of it reaching the international community with a significant time lag, when it was too late to intervene. But with the presence of internet connected mobile devices everywhere, capable of uploading live footage within seconds of an event occurring, the entire world can monitor and find out what is happening on the scene, in real time. It lets repressive regimes know the entire world is watching them, that they cannot simply massacre their people with impunity, and it creates evidence for potential prosecutions if they do. It, therefore, puts pressure on them to respect the rights of their citizens during such precarious times. To prevent governments from violently stamping out public political dissent without evidence, internet access must be preserved, especially in times of war or political unrest. [1]\n\n[1] Bildt, Carl, 2012. “A Victory for The Internet”. New York Times. 5 July 2012.\n", "title": "" } ]
arguana
a5f4bfabedba21d506eefe847e92df06
National security takes precedence. Internet access is not a fundamental right as recognized by any major human rights convention, if it can be called a right at all. [1] Even if we accept that people should have a right to internet access, in times of war or civil unrest the government should be able to abridge lesser rights for the sake of something that is critical to the survival of the state, like national security. After all, in a war zone few rights survive or can be upheld at all. Preventing such an outcome at the expense of the temporary curtailment of some lesser rights is entirely justified. Under current law, in most states, only the most fundamental of rights, like the right to life, the prohibitions of torture and slavery, and the right to a fair trial, are regarded as inalienable [2]. [1] For more see the debatabase debate on internet access as a human right. [2] Article 15 of the European Convention on Human Rights: “In time of war or other public emergency threatening the life of the nation any High Contracting Party may take measures derogating from its obligations under this Convention to the extent strictly required by the exigencies of the situation, provided that such measures are not inconsistent with its other obligations under international law.” http://www.hri.org/docs/ECHR50.html
[ { "docid": "432d37713306c981c63f858686094fc4", "text": "government terrorism digital freedoms access information should In July 2012, The United Nations Human Rights Council endorsed a resolution upholding the principle of freedom of expression and information on the internet. In a special report, it also “called upon all states to ensure that Internet access is maintained at all times, including during times of political unrest” [1] . While access to the internet has not yet had time to establish itself legally as a human right, there are compelling reasons to change its legal status, and the UN is leading the charge. Even before internet access is recognized as a human right the idea that national security should take precedence over ‘lesser rights’ is wrong; states should not survive at the expense of the rights of their citizens. States exist to protect their citizens not harm them.\n\n[1] Kravets, David, 2011. “UN Report Declares Internet Access a Human Right”. Wired.com, 6 November 2011.\n", "title": "" } ]
[ { "docid": "ceef1f7e5b30d1ba0b21509db0e696da", "text": "government terrorism digital freedoms access information should Historical precedent does not apply to the internet. It is very different to media reporting during times of unrest; the internet is not just a means of disseminating information but also for many people their main form of communication; the U.S. government has never tried to ban people from using telephones. There are severe downsides to the censorship of information during times of war or civil unrest, the most notable one being that it is used to hide the real cost and consequences of war from the population which is expected to support it. Conversely, in a world where every mobile phone is now connected to a global network, people all around the world can have access to an unparalleled amount of information from the field. Curtailing such internet access is to their detriment.\n", "title": "" }, { "docid": "5f1aef8d29eafd3f70f7c92067f6339b", "text": "government terrorism digital freedoms access information should Other means can be employed to ensure the safety of the population without disrupting access to the internet, like deploying security forces to make sure protests don’t get out of hand or turn violent. In fact, being able to monitor online activity through social media like Facebook and Twitter might actually aid, rather than hinder law enforcement in ensuring the safety of the public. London’s police force, the Metropolitan Police, in the wake of the riots has are using software to monitor social media to predict where social disorder may take place. [1]\n\n[1] Adams, Lucy, 2012. “Police develop technology to monitor social neworks”. Heraldscotland, 6 August 2012.\n", "title": "" }, { "docid": "ea123c1aaad9989c7b7cfaf3f5f308b7", "text": "government terrorism digital freedoms access information should Freedom of expression, assembly, and information are important rights, but restrictions can be placed on all of them if a greater good, like public safety, is at stake. For example, one cannot use her freedom of expression to incite violence towards others and many countries regard hate speech as a crime. [1] Therefore, if the internet is being used for such abuses of ones rights, the disruption of service, even to a large number of people, can be entirely warranted.\n\n[1] Waldron, Jeremy, The Harm in Hate Speech, Harvard University Press, 8 June 2012, p.8.\n", "title": "" }, { "docid": "55f34c7e064bd48c7274695b7a81afb4", "text": "government terrorism digital freedoms access information should Being able to witness atrocities from the field in real time does not change the international community’s capacity or political willingness to intervene in such situations. If anything, it has had the unfortunate side effect of desensitizing international public opinion to the horrors of war and conflicts, like the one in Syria where there have been thousands of videos showing the actions of the Syrian government but this has not resulted in action from the international community. [1] The onslaught of gruesome, graphic imagery has made people more used to witnessing such scenes from afar and less likely to be outraged and to ask their governments to intervene.\n\n[1] Harding, Luke, 2012. “Syria’s video activists give revolution the upper hand in media war”. 
Guardian.co.uk, 1 August 2012.\n", "title": "" }, { "docid": "f4dd344282c44b8d35ea262291f484c4", "text": "government terrorism digital freedoms access information should Democratic change can come about in a variety of ways. Violent public protests are only one such way, and probably the least desirable one. And now, with access to social media nearly universally available, such protests can be organized faster, on a larger, more dangerous scale than ever before. It encourages opposition movements and leaders in such countries to turn away from incremental, but peaceful changes through political negotiations, and to appeal to mass protests instead, thus endangering the life or their supporters and that of the general public. Governments that respond to violence by cutting off access are not responding with repression but simply trying to reduce the violence. Cutting internet access is a peaceful means of preventing organized violence that potentially saves lives by preventing confrontation between violent groups and riot police.\n", "title": "" }, { "docid": "b174a22c6e88b863f97d61570a80dd8c", "text": "government terrorism digital freedoms access information should Historical precedent.\n\nHistorically, governments have always controlled the access to information and placed restriction on media during times of war. This is an entirely reasonable policy and is done for a number of reasons: to sustain morale and prevent predominantly negative stories from the battlefield reaching the general public, and to intercept propaganda from the enemy, which might endanger the war effort [1] . For example, both Bush administrations imposed media blackouts during wartime over the return of the bodies of dead American soldiers at Dover airport [2] . The internet is simply a new medium of transmitting information, and the same principles can be applied to its regulation, especially when the threat to national security is imminent, like in the case of disseminating information for the organization of a violent protest.\n\n[1] Payne, Kenneth. 2005. “The Media as an Instrument of War”. Parameters, Spring 2005, pp. 81-93.\n\n[2] BBC, 2009. “US War Dead Media Blackout Lifted”.\n", "title": "" }, { "docid": "8a89fc13e9fd39fe304ec49b0a276003", "text": "government terrorism digital freedoms access information should The internet as a threat to public safety.\n\nThe internet can be used as a tool to create an imminent threat to the public. If public officials had information that a massive protest is being organized, which could spiral into violence and endanger the safety of the public, it would be irresponsible for the government not to try to prevent such a protest. Governments are entrusted with protecting public safety and security, and not preventing such a treat would constitute a failure in the performance of their duties [1] . An example of this happening was the use first of Facebook and twitter and then of Blackberry messenger to organise and share information on the riots in London in the summer of 2011. [2]\n\n[1] Wyatt, Edward, 2012. “FCC Asks for Guidance on Whether, and When to Cut Off Cellphone Service.” New York Times, 2 March 2012.\n\n[2] Halliday, Josh, 2011. “London riots: how BlackBerry Messenger played a key role”. 
Guardian.co.uk, 8 August 2011.\n", "title": "" }, { "docid": "cf47f900746702d040833d9df8416bee", "text": "government terrorism digital freedoms access information should Disrupting internet service is a form of repression.\n\nThe organization of public protests is an invaluable right for citizens living under the rule of oppressive regimes. Like in the case of the Arab Spring, internet access gives them the tools to mobilize, make their message heard, and demand greater freedoms. In such cases, under the guise of concern for public safety, these governments disrupt internet service in an attempt to stamp out legitimate democratic protests and stamp out the dissatisfied voices of their citizens [1] They are concerned not for the safety of the public, but to preserve their own grasp on power. A good example of this are the actions of the government of Myanmar when in 2007 in response to large scale protests the government cut internet access to the whole country in order to prevent reports of the government’s crackdown getting out. [2] Establishing internet access as a fundamental right at international level would make it clear to such governments that they cannot simply cut access as a tactic to prevent legitimate protests against them.\n\n[1] The Telegraph. “Egypt. Internet Service Disrupted Before Large Rally”. 28 January 2011.\n\n[2] Tran, Mark, 2007. “Internet access cut off in Burma”. Guardian.co.uk, 28 September 2007.\n", "title": "" }, { "docid": "2c322b6919bed304eaa50dba196afc8f", "text": "government terrorism digital freedoms access information should The right to internet access as a fundamental right.\n\nInternet access is a “facilitative right”, in that it facilitates access to the exercise of many other rights: like freedom of expression, information, and assembly. It is a “gateway right”. Possessing a right is only as valuable as your capacity to exercise it. A government cannot claim to protect freedom of speech or expression, and freedom of information, if it is taking away from its citizens the tools to access them. And that is exactly what the disruption of internet service does. Internet access needs to be a protected right so that all other rights which flow from it. [1]\n\nThe Internet is a tool of communication so it is important not just to individuals but also to communities. The internet becomes an outlet that can help to preserve groups’ culture or language [2] and so as an enabler of this groups’ culture access to the internet may also be seen as a group right – one which would be being infringed when the state cuts off access to large numbers of individuals.\n\n[1] BBC, 2010. “Internet Access is ‘a Fundamental Right’\".\n\n[2] Jones, Peter, 2008. \"Group Rights\", The Stanford Encyclopedia of Philosophy (Winter 2008 Edition), Edward N. Zalta (ed.).\n", "title": "" }, { "docid": "f2c5216ff441d8762a97ccff560d6a0c", "text": "government terrorism digital freedoms access information should The prevention of atrocities during war and unrest.\n\nIn the past, horrific crimes could be committed in war zones without anyone ever knowing about it, or with news of it reaching the international community with a significant time lag, when it was too late to intervene. But with the presence of internet connected mobile devices everywhere, capable of uploading live footage within seconds of an event occurring, the entire world can monitor and find out what is happening on the scene, in real time. 
It lets repressive regimes know the entire world is watching them, that they cannot simply massacre their people with impunity, and it creates evidence for potential prosecutions if they do. It, therefore, puts pressure on them to respect the rights of their citizens during such precarious times. To prevent governments from violently stamping out public political dissent without evidence, internet access must be preserved, especially in times of war or political unrest. [1]\n\n[1] Bildt, Carl, 2012. “A Victory for The Internet”. New York Times. 5 July 2012.\n", "title": "" } ]
arguana
fa9e4d8093001f3701a25ffc30859e68
Considering the amount of data governments produce, compelling them to publish all of it would be counterproductive as citizens would be swamped. It is a misconception in many areas that more is necessarily better, and that is, perhaps, more true of information than of most things. Public bodies produce vast quantities of data and often have a greater tendency to maintain copious records than their private sector equivalents. US government agencies will create data that would require “20 million four-drawer filing cabinets filled with text” over the next two years. [i] Simply dumping this en masse would be a fairly effective way of masking any information that a public body wanted kept hidden. Deliberately poor referencing would achieve the same result. This ‘burying’ of bad news at a time when everyone is looking somewhere else is one of the oldest tricks in press management. For example, Jo Moore, an aide to then Transport Secretary Stephen Byers, suggested that September 11 2001 was “a very good day to get out anything we want to bury.” She proposed burying a U-turn on councillors’ expenses. [ii] For it to genuinely help with the transparency and accountability of public agencies, it would require inordinately detailed and precise cataloguing and indexing – a process that would be likely to be both time-consuming and expensive. The choice would, therefore, be between a mostly useless set of data that would require complex mining by those citizens who were keen to use it or the great expense of effectively cataloguing it in advance. Even this latter option would defeat the objective of greater accountability because whoever had responsibility for the cataloguing would have far greater control of what would be likely to come to light. Instead, ensuring a right of access for citizens means that they can have reasonable access to exactly the piece of information they are seeking [iii]. [i] Eddy, Nathan, ‘Big Data Still a Big Challenge for Government IT’, eweek, 8th May 2012, http://www.eweek.com/c/a/Government-IT/Big-Data-Still-a-Big-Challenge-fo... [ii] Sparrow, Andrew, ‘September 11: ‘a good day to bury bad news’’, The Telegraph, 10 October 2001, http://www.telegraph.co.uk/news/uknews/1358985/Sept-11-a-good-day-to-bury-bad-news.html [iii] Freedom of Information as an Internationally Protected Human Right. Toby Mendel, Head of Law at Article 19.
[ { "docid": "232325d4d20cc6e83e9a56d494081b9c", "text": "governmental transparency house believes there should be presumption Although it would be time-consuming to approach so much information, it is not impossible to manage it effectively. As Wikileaks has demonstrated, given access to large quantities of information, it is a relatively straightforward process to start with records that are likely to prove interesting and then follow particular routes from there. In addition, governments, like all organisations, have information management systems, there would be no reason not to use the same model.\n\nAdditionally, the very skill of journalism is going beyond the executive summary to find the embarrassing fact buried away in appendix nineteen. That would still be the case under this model, it would just be easier.\n", "title": "" } ]
[ { "docid": "a193d58b0d74ee2c66795b06f88ee150", "text": "governmental transparency house believes there should be presumption There are, of course some costs to having a truly open and accountable government, but an effective right of access would allow much of that information to be made available. After all what the public sector bodies are paying in commercial transactions is of great interest to the public. If public bodies are getting a particularly good rate from suppliers, it might well raise the question of “Why?” For example, are they failing to enforce regulations on a particular supplier in return for a good price. In that instance, their other customers and their competitors would seem to have every right to know.\n", "title": "" }, { "docid": "db65e38d3bc772a6d4d1e7dd8071fe5e", "text": "governmental transparency house believes there should be presumption It is frequently useful to see the general approach of a public organisation as reflected in routine discussions. Opposition is wrong to suggest that such information would only cast a light on ideas that were never pursued anyway so they don’t matter. It would also highlight ideas that agencies wanted to pursue but felt they couldn’t because of the likely impact of public opinion, knowing such information gives useful insight into the intentions of the public agency in question.\n", "title": "" }, { "docid": "d4f713d94dccc069709e797e465a937a", "text": "governmental transparency house believes there should be presumption Governments have, prima facie, a different relationship with their own citizens than they have with those of other countries. In addition, as with the previous argument, extending the right of access does not, per se, require total access. The approach is also simply impractical as it would require every nation on the planet to take the same approach and to have comparable standards in terms of record keeping and data management. At present most states publish some data but the upper and lower thresholds of what is made public vary between them. To abolish the upper limit (ministerial briefing, security briefings, military contractors, etc.) would require everyone to do it, otherwise it would be deeply unsafe for any one state to act alone. The likelihood of persuading some of the world’s more unsavory or corrupt regimes to play ball seems pretty unlikely. The first of those is improbable, the latter is impossible.\n", "title": "" }, { "docid": "4fea4045c8b6854771a433c1d46fd29a", "text": "governmental transparency house believes there should be presumption It seems unlikely that total publication would save much in the way of time or money. If the data was not indexed in some way it would be absurdly difficult to navigate - and that takes time and money.\n\nThere are advantages to building a delay into systems such as this, if a piece of information genuinely justifies a news story, then it will do so at any time. If it’s only of interest in the middle of a media feeding frenzy, then it seems unlikely that it was all that important.\n", "title": "" }, { "docid": "7b3bcfa525c738e042848d9dcc690876", "text": "governmental transparency house believes there should be presumption The idea that, presented with a vast mass of frequently complex data, everyone would be able to access, process and act on it in the same way is fantasy. 
Equally the issue of ‘who guards the guards’ that Proposition raises is a misnomer; exactly the groups mentioned are already those with the primary role of scrutinizing government actions because they have the time, interest and skills to do so. Giving a right to access would give them greater opportunities to continue with that in a way that deluging them with information would not.\n", "title": "" }, { "docid": "11d2f7bac64bf74b4df42e19dfe53fa5", "text": "governmental transparency house believes there should be presumption Relying on a right of access would also have addressed the concerns set out by Proposition but would do so in a way that would not endanger actual concerns of national security by allowing citizens the right to challenge such decisions. An independent review could determine where the motivation is genuinely one of national security and those where it is really political expediency. The right to information for citizens is important but should not jeopardize the right to life of combat troops.\n", "title": "" }, { "docid": "9d7a80e90b11471fe5dc3a768893fe57", "text": "governmental transparency house believes there should be presumption Public bodies require the ability to discuss proposals freely away from public scrutiny\n\nKnowing that everything is likely to be recorded and then published is likely to be counter-productive. It seems probable that anything sensitive – such as advice given to ministers by senior officials – would either not be recorded or it would be done in a way so opaque as to make it effectively meaningless [i] .\n\nBy contrast knowing that such conversations, to focus on one particularly example, are recorded and can be subjected to public scrutiny when there is a proven need to do so ensures that genuine accountability – rather than prurience or curiosity, is likely to be both the goal and the outcome.\n\nNone of us would like the process of how we reached decisions made public as it often involves getting things wrong a few times first. However, there are some instances where it is important to know how a particular decision was reached and whether those responsible for that decision were aware of certain facts at the time – notably when public figures are claiming that they were not aware of something and others are insisting that they were. In such an instance the right to access is useful and relevant; having records of every brainstorming session in every public body is not. As the Leveson inquiry is discovering, an extraordinary amount of decisions in government seem to be made informally, by text message or chats at parties. Presumably that would become evermore the case if every formal discussion were to be published [ii] .\n\n[i] The Pitfalls of Britain’s Confidential Civil Service. Samuel Brittan. Financial Time 5 March 2010.\n\n[ii] This is nothing very new, see: Downing Street: Informal Style. BBC website. 14 July 2004.\n", "title": "" }, { "docid": "36e797eb873255c50c67625bc900fb12", "text": "governmental transparency house believes there should be presumption It is reasonable that people have access to information that effects them personally but not information that relates to their neighbours’, employers’, former-partners’ or other citizens who maythose who work for public bodies.\n\nThe right to access allows people to see information that affects them personally or where there is reasonable suspicion of harm or nefarious practices. 
It doesn’t allow them to invade the privacy of other citizens who just happen to work for public bodies or have some other association [i] .\n\nUnless there is reason to suspect corruption, why should law-abiding citizens who sell goods and services to public bodies have the full details of their negotiations made public for their other buyers, who may have got a worse deal, to see? Why should the memo sent by an otherwise competent official on a bad day be made available for her neighbours to read over? A presumption in favour of publication would ensure that all of these things, and others, would be made a reality with the force of law behind them.\n\nThis would place additional burdens on government in terms of recruitment and negotiations with private firms – not to mention negotiations with other governments with less transparent systems. Let’s assume for the moment that the British government introduced a system, it is quite easy imagine a sense of “For God’s sake don’t tell the British” spreading around the capitals of the world fairly quickly.\n\n[i] Section 40 0(A) od the FOIA. See also Freedom of Information Act Environmental Information Regulations. When Should Salaries be Disclosed? Information Commissioner’s Office.\n", "title": "" }, { "docid": "dee8cac711700d293b9218914332fecb", "text": "governmental transparency house believes there should be presumption Compelling public bodies to publish information ensures that non-citizens, minors, foreign nationals and others have access to information that affects them.\n\nGenuine transparency and accountability of government action is not only in the interests of those who also have the right to vote for that government or who support it through the payment of taxes. The functioning of immigration services would seem to be a prime example. Maximising access to information relating to government decisions by dint of its automatic publication of information relating to those decisions ensures that all those affected will have recourse to the facts behind any decision.\n\nIf, for example, a nation’s aid budget is cut or redirected, why should the citizens of the affected nation not have a right to know why [i] ? If, as is frequently the case, it has happened because of an action or inaction by their own government, then it is important that they know. Equally if such a decision were taken for electoral gain, they at least have the right to know that there is nothing they or their government could do about it.\n\n[i] Publish What You Fund: The Global Campaign For Aid Transparency. Website Introduction.\n", "title": "" }, { "docid": "49a5860842c98055000dd5751d43f596", "text": "governmental transparency house believes there should be presumption Even the most liberal FoI regime tends to pander to certain groups in society full disclosure levels that playing field\n\nPeople have many different interests in the accountability of governments; different areas of concern, differing levels of skill in pursuing those interests and so on. They deserve, however, an equal degree of transparency from governments in relation to those decisions that affect them. Relying on a right to access is almost certainly most likely to favour those who already have the greatest access either through their profession, their skills or their social capital. 
The use of freedom of information requests in those countries where they are available shows this to be the case, as they have overwhelmingly been used by journalists, with a smattering of representation from researchers, other politicians and lawyers and so on. In the UK between 2005 and 2010 the total number registered by all ‘ordinary’ members of the public is just ahead of journalists, the next largest group. The public are overwhelmingly outnumbered by the listed professional groups [i] .\n\nRequired publication, by contrast, presents an even playing field to all parties. Rather than allowing legislators to determine how and to whom – and for what – they should be accountable, a presumption in favour of publication makes them accountable to all. As a result, it is the only truly effective way of ensuring one of the key aims set out in favour of any freedom of information process.\n\n[i] Who Makes FOI Requests? BBC Open Secrets Website. 14 January 2011.\n", "title": "" }, { "docid": "5374802042af0cfbda4884a42493e865", "text": "governmental transparency house believes there should be presumption If public bodies do not have an obligation to publish information, there will always be a temptation to find any available excuses to avoid transparency.\n\nThe primary advantage of putting the duty on government to publish, rather than on citizens to enquire is that it does not require the citizen to know what they need to know before they know it. Publication en masse allows researchers to investigate areas they think are likely to produce results, specialists to follow decisions relevant to their field and, also, raises the possibility of discovering things by chance. The experience of Wikipedia suggests that even very large quantities of data are relatively easy to mine as long as all the related documentation is available to the researcher – the frustration, by contrast, comes when one has only a single datum with no way of contextualising it. Any other situation, at the very least, panders to the interests of government to find any available excuse for not publishing anything that it is likely to find embarrassing and, virtually by definition, would be of most interest to the active citizen.\n\nKnowing that accounts of discussions, records of payments, agreements with commercial bodies or other areas that might be of interest to citizens will be published with no recourse to ‘national security’ or ‘commercial sensitivity’ is likely to prevent abuses before they happen but will certainly ensure that they are discovered after the event [i] .\n\nThe publication of documents, in both Washington and London, relating to the build-up to war in Iraq is a prime example of where both governments used every available excuse to cover up the fact that that the advice they had been given showed that either they were misguided or had been deliberately lying [ii] . A presumption of publication would have prevented either of those from determining a matter of vital interest to the peoples of the UK, the US and, of course, Iraq. All three of those groups would have had access to the information were there a presumption of publication.\n\n[i] The Public’s Right To Know. Article 19 Global Campaign for Freedom of Expression.\n\n[ii] Whatreallyhappened.com has an overview of this an example of how politicians were misguided – wilfully or otherwise can be found in: Defector admits to lies that triggered the Iraq War. Martin Chulov and Helen Pidd. The Guardian. 
15 February 2011.\n", "title": "" }, { "docid": "8c4c0fdbffcf784e055898595f30aa52", "text": "governmental transparency house believes there should be presumption A faster, cheaper and simpler process\n\nThere are costs involved in processing FoI requests, in terms of both time and money. [i] To take one example, Britain’s largest local authority, Birmingham, spends £800,000 a year dealing with FoI requests. [ii] There is also a delay from the point of view of the applicant. Such a delay is more than an irritant in the case of, for example, immigration appeals or journalistic investigations. Governments know that journalists usually have to operate within a window of time while a story is still ‘hot’. As a result, all they have to do is wait it out until the attention of the media turns elsewhere to ensure that if evidence of misconduct or culpability were found, it would probably be buried as a minor story if not lost altogether. As journalism remains the primary method most societies have of holding government to account, it doesn’t seem unreasonable that the methodology for releasing data should, at least in part, reflect the reality of how journalism works as an industry.\n\n[i] Independent Review of the Impact of the Freedom of Information Act. Frontier Economics. October 2006.\n\n[ii] Dunton, Jim, ‘Cost of FoI requests rises to £34m’, Local Government Chronicle, 16 September 2010, http://www.lgcplus.com/briefings/corporate-core/legal/cost-of-foi-requests-rises-to-34m/5019109.article\n", "title": "" } ]
arguana
0560286557101ef5f553d6f0c4f0388c
Financial dealings can indicate candidates’ willingness to circumvent the system/play by the rules. A lot of politicians come from positions of prestige and power before seeking public office. Many politicians have wealth in their own right, or a base of wealthy supporters. Understanding where that wealth came from and how they used their privileged position is very important to citizens when choosing their leaders. Access to candidates’ financial information allows good candidates to show their honesty and financial uprightness, and sometimes even to display the talent and acumen that allowed them to succeed. More importantly, it allows people to scrutinize the dealings of politicians who used their often privileged position to avoid paying high taxes and to shield their wealth from the public taking its legal due. What these insights provide is a valuable snapshot of what candidates are willing to do to promote their own interests versus those of the state and society. It shows if there is a propensity to engage in morally dubious practices, and such behavior could well be extrapolated to a potential incentive for corrupt practice. While tax avoidance is not illegal, it can well be considered unjust when rigorously applied, especially considering that the special knowledge necessary to profit from it belongs only to those of wealth and privilege. The value of this knowledge was made particularly clear in the case of Mitt Romney’s presidential bid. When Romney released his tax returns, it became painfully clear that he was using the system to his advantage, at the expense of the taxpayer. [1] Citizens deserve to know to what lengths, if any, those who wish to represent them are willing to game the system they would be elected to lead. [1] Drucker, J. “Romney Avoids Taxes Via Loophole Cutting Mormon Donations”. Bloomberg. 29 October 2012, http://www.bloomberg.com/news/2012-10-29/romney-avoids-taxes-via-loophole-cutting-mormon-donations.html
[ { "docid": "63efb1514e77cb20193c8505f85a7d61", "text": "governmental transparency house would post full financial history all Tax avoidance is not illegal, and it should not be treated as if it were by the prying media and would-be class-warriors. Even if one might think it unpleasant to look for loopholes to protect private wealth, it is really only natural for people to wish to pay no more than they have to in tax. Mitt Romney was simply using the skills that allowed him to be a great business success to keep his costs as low as possible. Trying to make a political issue out of these sorts of dealings only serves to obscure from the real policy issues, and to focus the debate on divisive and unhelpful issues of class war.\n", "title": "" } ]
[ { "docid": "36a1c30a2282d36b3b3c118960f47af4", "text": "governmental transparency house would post full financial history all Personal finances mean little when it comes to financial policy. Trying to glean any sort of financial acumen on the macro scale from private dealings is extremely misguided. Successful business leaders often make poor political leaders, as the world of business is very different from the horse-trading of politics. [1] In terms of leading others as one leads one’s own life, there is no reason to assume that a candidate who has used the system to his or her advantage would use the additional power of office to enrich themselves or their friends further. Mitt Romney was an effective governor of Massachusetts, and was willing to increase taxes that were personally costly to him.\n\n[1] Jenkins, H. “Good Businessman, Bad President?”. Wall Street Journal. 23 October 2012, http://online.wsj.com/article/SB10001424052970203406404578074620655476826.html\n", "title": "" }, { "docid": "1f1eca959a37ef498c4cf6b0994c5088", "text": "governmental transparency house would post full financial history all So long as politicians do their duty by representing the interests of those that elected them, they are fulfilling their end of the covenant with the people. To demand the financial records of candidates will not offer more than crude snapshot of one aspect of their lives, not giving the desired insight into their character, while massively intruding on the politician’s personal life. As is often the case here the right to know conflicts with the candidates right to privacy. Of course it is right to know if a candidate pays his taxes, but do they need to know every expense he has incurred over the last few years or how much a candidate earned years ago?\n", "title": "" }, { "docid": "af85a817a65c9a0f5fd93b2d7d826187", "text": "governmental transparency house would post full financial history all Privacy is a right but it is not sacrosanct, and certainly should not be for people who serve the public. Freedom of speech is considered sacred in a free society, but anyone reasonable would agree that shouting “Fire!” in a crowded theatre is not given such protection, showing that even the most treasured rights are curtailed in the public interest. Both the special position of politicians as the effective embodiment of the people’s will, and the special power they wield, which is far vaster than that of any private agent, demands a higher level of scrutiny into their backgrounds, which means looking into their financial records, which can divulge much about their competence and character.\n", "title": "" }, { "docid": "d7e4aea5d0fc48db8dc64babb9ef35b7", "text": "governmental transparency house would post full financial history all While elections should of course focus a great deal of attention on policy, it is also critical that voters understand who exactly it is they are voting for. That means looking beyond the manifesto and getting an understanding of the candidate’s character and private dealings. Having access to their private financial records can go a long way toward revealing this information, as they provide valuable insight into both the candidate’s financial abilities, and his or her attitude toward the state.\n", "title": "" }, { "docid": "177950691279f0f2083826f2f446c2ef", "text": "governmental transparency house would post full financial history all Firstly, personal wealth may not be indicative of political belief. 
Wealthy people can be advocates for higher taxes and workers’ rights. Secondly, maybe creating class awareness is not such a bad thing. The revelation of candidates’ personal finances will help show average voters what their leaders are actually like, that they have acquired great wealth and seek to protect it. Consciousness about these things can only help to galvanize political participation and to stoke real discourse about things like the proper distribution of wealth, issues that often fall foul of the political mainstream of party politics.\n", "title": "" }, { "docid": "0c524c9343b2953472074622a29e458e", "text": "governmental transparency house would post full financial history all This information offers valuable texture to the financial proposals candidates offer as potential policy\n\nWhen candidates make proposals for public spending they often seek to use their own financial stories as evidence of their credibility. Without public knowledge of their actual financial record, besides what can be gleaned from secondary sources and their words, these claims cannot be evaluated fully by the voting public. Publishing their financial records allows the citizens to get a genuine grasp of their –would-be representatives abilities. More importantly, the proposals of candidates can be scrutinized in relation to how the candidate, and those of the same financial stratum as the candidate, would benefit from them. When Mitt Romney proposed new tax and spending reforms in the last US presidential election, it was clear that his policies inordinately favored the rich and increased the tax burden of the middle class. [1] Understanding Romney’s personal position of great wealth served confirm to the public their suspicions that his policies were designed to favor the financial elite of which he was a part. It is in the public’s interest to elect representatives who serve their interests, not those of moneyed elites.\n\n[1] Dwyer, P. “Surprise! Romney Tax Plan Favors the Rich”. Bloomberg. 1 August 2012. http://www.bloomberg.com/news/2012-08-01/surprise-romney-tax-plan-favors-the-rich.html\n", "title": "" }, { "docid": "d8bb4edf897a615ae307b9e1bb609976", "text": "governmental transparency house would post full financial history all Voters have a right to know the background of their would-be representatives, including financial background\n\nIn any society, no matter how liberal, rights of every kind have limitations. Rights are general statements of principles that are then caveated and curtailed to fit the public interest across a range of circumstances. When an individual seeks elevation to public office, he or she must accept that the role they are applying for requires extra transparency. As the representative of the people, the politician is more than just the holder of a job appointed by the people, but is the elected servant, whose duty is to lead, including by example. It is a strange relationship, and it is one that demands the utmost confidence in the holder. This political power will often involve power over the public purse so it is essential for the public to know if the candidate is financially honest and not going to use his election for corrupt purposes. [1] Thus, when citizens place their political power in the hands of an elected representative, they gain the reciprocal right over that representative to have his or her life and character laid bare for their approval. This is done generally through political campaigns that focus on candidates’ character and life story. 
But often candidates prove reticent to share some details, particularly financial details. But if citizens are to make a good decision about what sort of person they wish to lead them, they require information about the financial background of their representatives, to see that they comport themselves in business in a way that is fitting to the character of a leader.\n\n[1] Rossi, I., and Blackburn, T., “Why do financial disclosure systems matter for corruption?” blogs.worldbank.org, 8 November 2012, http://blogs.worldbank.org/psd/why-do-financial-disclosure-systems-matter-for-corruption\n", "title": "" }, { "docid": "d90135699517a334e6f230c847042a43", "text": "governmental transparency house would post full financial history all Fixating on candidates’ financial records fuels the fire of class war\n\nMore and more the financial dealings of candidates are used against them in politics. In past decades, politicians in many countries were proud to run on the basis of their successes in the private sector. Today, however, that success has often become a liability. One only need look at the paradigmatic example of this occurrence, Mitt Romney. When running for governor in Massachusetts, his strong record in business was touted as a quality favoring him. Yet in the presidential election, Romney’s wealth was touted as an example of capitalist excess, of often ill-gotten gains. [1] The change in rhetoric has indicated marked shift in politics in a number of countries, most visibly the United States, but also places like France, where the development of wealth and success are deemed to be the marks of greed and unfairness. These trends would only be compounded with the release of candidates’ financial records. People with records of wealth and financial ability will be further demonized as being anti-poor. These sorts of political tactics obscure from the realities of politics and seeks to separate people along class, rather than political ideological, lines. Such divisions are exceptionally dangerous to the functioning of a democratic society, which demands buy-in and willing participation from all classes and groups in order to function.\n\n[1] Erb, K. “Why Romney’s ‘Tax Avoidance’ Strategies Don’t Deserve Criticism”. Forbes. 30 October 2012. http://www.forbes.com/sites/kellyphillipserb/2012/10/30/why-romneys-tax-avoidance-strategies-dont-deserve-criticism/\n", "title": "" }, { "docid": "f2b810b7036920b5b385ddb8d1a2ac20", "text": "governmental transparency house would post full financial history all Individuals have a right to privacy, including to their own financial records\n\nPrivacy is a fundamental human right, one that should be defended for all citizens, including those who govern us. [1] What people do with their own finances is their own business. People generally speaking have a basic respect for privacy. Politicians don’t owe the electorate any special privileges like their financial history. A politician is effectively an employee of his constituents and the citizens of the polity. His or her duty is not so special as to demand the handing over of all information on one of the most critical aspects of their private life. Financial affairs like income and taxes are a private matter, and should be treated as such by voters and governments. This is even more the case when it comes to financial history, much of which may have happened long before the individual decided to become a politician. 
Making politicians’ financial affairs fair game for reporters and others who would exploit the information only serves to undermine the rights that all citizens rightly enjoy.\n\n[1] Privacy International. 2010. “Privacy as a Political Right”. Index on Censorship 39(1): 58-68. https://www.privacyinternational.org/reports/privacy-as-a-political-right\n", "title": "" }, { "docid": "b64292a92bc691d17d8797e56f9ad6ca", "text": "governmental transparency house would post full financial history all The focus of elections should be on policy, not personal issues like financial records\n\nDiscussion of candidates’ personal finances serves only to obscure the real issues facing society. When the focus becomes how much tax Candidate X paid and what loopholes he or she exploited, the media tends to latch onto it. It sells more newspapers and gets more hits online to make a salacious story about the financial “misdeeds” of a candidate than to actually discuss what he or she stands for. It fuels the growing tendency of the media to attach itself to petty commentary rather than real investigation and analysis. Ultimately, an examination of the personal finances of a candidate tells voters little about what he or she stands for on the issue of state finances. Throughout history, personal financial success has been shown to not necessarily correlate with political acumen. For example, William Pitt became the youngest, and one of the longest-serving, Prime Ministers of the United Kingdom, yet he was in extreme debt when he died. [1] Narrow attention paid to personal finances takes up people’s limited time available to consume useful information to direct their voting, and the news media have limited air time to discuss issues. It is best that both use their time to maximum effect, and not be sidetracked by distractions.\n\n[1] Reilly, Robin (1978). Pitt the Younger 1759–1806. Cassell Publishers.\n", "title": "" } ]
arguana
5aa63cf1ec965eb09c142ea9ddf50815
Democratic states have an obligation to not bolster repression abroad. It is common for Western democracies to make sweeping statements that certain rights are universal, that their system of government is the one that should be most sought after in the world, and that democracy is the only legitimate form of government. As when Obama in Cairo proclaimed: “These are not just American ideas; they are human rights. And that is why we will support them everywhere.” [1] They claim to work in the United Nations and other organizations toward the improvement of rights in other countries and clamour about the need to build government accountability around the world, using their liberal-democratic paradigm as the model. Yet at the same time democratic governments and companies sell technologies to non-democratic allies that are used to systematically abuse the rights of citizens and to entrench the power of those avowedly illegitimate regimes. These hypocrisies read as a litany of shame. A telling example is the Blair government in the United Kingdom selling weapons to an oppressive regime in Indonesia for the sake of political expediency even after proclaiming an ‘ethical foreign policy’. [2] Even if democracies do not feel it is a defensible position to actively seek to subvert all non-democratic states, and believe that non-democracies should be considered semi-legitimate on the basis of nations’ right to self-determination, they should still feel morally obliged not to abet those regimes by providing the very tools of oppression on which they rely. [3] To continue dealing in these technologies serves only to make democratic countries’ statements hollow, and the rights they claim to uphold seem less absolute, a risk in itself to freedoms within democracies. Respect for rights begins at home, and actively eroding them elsewhere reduces respect for them by home governments. [1] Obama, Barack, “Remarks by the President on a new beginning”, Office of the Press Secretary, 4 June 2009, http://www.whitehouse.gov/the_press_office/Remarks-by-the-President-at-Cairo-University-6-04-09 [2] Burrows, G. “No-Nonsense Guide to the Arms Trade”. New Internationalist. 2002, http://www.thirdworldtraveler.com/Weapons/Arms_Trade.html [3] Elgin, B. “House Bill May Ban US Surveillance Gear Sales”. Bloomberg. 9 December 2012. http://www.bloomberg.com/news/2011-12-09/house-bill-would-ban-surveillance-gear-sales-by-american-firms.html
[ { "docid": "a723961d8bb4da9bfc12bf3149c59cae", "text": "e internet freedom censorship ip digital freedoms freedom expression A democracy, like any state, owes its first duty to its citizens, and its national interest is therefore in selling this equipment to help business at home. While it is convenient, perhaps even morally right at times, to stand publicly for the universality of democratic principles, such stands should not be taken at the expense of national security or influence. It should certainly not be considered an obligation. Sweeping policies like this will alienate valuable allies and make it more difficult for democracies to deal with the undemocratic world. With regard to domestic freedoms, states have long held different standards of action when dealing with their own citizens than those of other states, and that has never served to erode domestic freedoms.\n", "title": "" } ]
[ { "docid": "1087f1b4b53db48bb2437f5e1abd4939", "text": "e internet freedom censorship ip digital freedoms freedom expression Is a minor ban really a good signal? The chances are the government will ignore it and those who it is meant to encourage will never hear about it. In the event that the regimes it is aimed at do take not far from weakening them, this policy serves only to alienate them. The lack of respect the policy is clearly aimed to show will galvanize the leaderships in undemocratic regimes to cut off various ties with democratic states, limiting the flow of ideas and democratic principles that natural adhere to activities like international trade. The result is non-democracies will be less willing to talk about reform in the international community because they see their very form of government as under threat by foreign agents seeking to discredit them. Ultimately, a boost in Western moral does little to promote democracy and human rights while a negative signal will result in regimes being more suspicious and obstinate.\n", "title": "" }, { "docid": "6d138e82e04b5e860f47352fa58f1291", "text": "e internet freedom censorship ip digital freedoms freedom expression Corporations are bound to obey the laws of the societies in which they are based, but they are not so constrained in their foreign dealings, in which they are bound instead by foreign laws that are often much more lax. The nature of the international landscape, with its many incompatible and overlapping forms of government and regulatory frameworks, demands that corporations be flexible in order to survive. The constraints put upon the manufacturers of surveillance equipment put forward by this policy will make them less competitive in the international market, which is often the primary market for these businesses. Furthermore, if they feel constrained they may pull up stakes and move their operations abroad to a more accommodating jurisdiction. This would serve to harm domestic jobs and undermine the ability of democratic states to maintain their edge over others in essential surveillance technology development.\n", "title": "" }, { "docid": "e062a0403ea1b07d8405ea2d44812e01", "text": "e internet freedom censorship ip digital freedoms freedom expression Security services have managed to watch over and infiltrate the efforts of dissidents all through history. The visibility and tactics is all that has changed. The internet was never going to just be an arena that helps dissidents in authoritarian regimes but as with other technological advances, such as the telephone both increases communication and provides methods of monitoring that communication. If non-democratic states were to lose access to Western technology, they would either procure comparable replacements from other non-democracies, or they would pursue more traditional forms of surveillance, ones that tend to be more invasive and physically threatening.\n", "title": "" }, { "docid": "090435bb2cfa2d7bd3814fac42249ad1", "text": "e internet freedom censorship ip digital freedoms freedom expression Real politick is not the only consideration democracies should entertain when they engage in international relations. Indeed, the Western powers have sought since World War II to develop a system of international justice that recognizes the primacy of peoples’ rights irrespective of where they are born. 
This principle is constantly compromised as democracies jockey for influence with undemocratic regimes, bolstering those regimes and their repressive norms in the process. In order to be consistent, and to serve the true interests of justice, democracies must not aid undemocratic governments in the repression of their people.\n", "title": "" }, { "docid": "cc217ebdf39b31af7fa3a6185f0fa628", "text": "e internet freedom censorship ip digital freedoms freedom expression Oppressive regimes have turned to the use of advanced surveillance technology in response to activists’ learning to evade more conventional methods of surveillance, and by moving their organizations online. Western surveillance technology has filled a niche that was once open for dissidents. By placing this ban, even if the regimes turn back to old methods, they will still be hampered in the crushing of dissent. Furthermore, no regime has the resources or power to have physical surveillance as pervasive as the technology denied them would allow. Electronic surveillance therefore can cast a much broader net that would allow the government to repress many more people who would not be subject to more labour intensive physical surveillance.\n", "title": "" }, { "docid": "62da6385db9e8eb249e509733ffbb2cc", "text": "e internet freedom censorship ip digital freedoms freedom expression Banning the sale of surveillance technology does not mean democracies are declaring all undemocratic regimes illegitimate. Rather, they are simply not allowing their technology to aid in the repression of people, which is the only use to which that technology is put in practice. Reform sometimes demands a firm hand, and while some regimes will be riled by what they perceive as an insult, the greater chance for dissidents to develop networks and voices is worth the cost.\n", "title": "" }, { "docid": "d4b4d2e32b8aa3779a0672ce9fd32c22", "text": "e internet freedom censorship ip digital freedoms freedom expression While Western states are willing to use surveillance technology to restrict their citizens, they do so always with a democratic mandate. That is the key difference. Democracies use surveillance technology to provide their people with the safety and security they demand, a security over which the people always have the veto of the ballot box. The non-democracy is not checked by any such power, and thus its use of surveillance technology faces no constraint.\n", "title": "" }, { "docid": "bb03bea5afcf959744f445d63fd22c9e", "text": "e internet freedom censorship ip digital freedoms freedom expression The right of Western businesses to sell their services abroad can be curtailed when their actions stand counter to the interests of their home governments\n\nCorporations are private entities that have the right to sell their services and to deal with agents foreign and domestic, including governments. However, this right can be limited when those actions are oppositional to the aims of the home state in which they are incorporated. The sale of surveillance technology to undemocratic regimes stands against the avowed aims of democracies and against their strategic interests in bolstering democracy abroad and maintaining a reputation for fair dealing. For this reason it is perfectly legitimate for governments to ban the corporations within their borders from selling dangerous technologies to foreign governments. Such is already the case with many kinds of strategic technology, especially weapons technology. 
[1] The EU, for example, bans a range of arms sales to various oppressive states on these grounds, [2] China in particular is an example where it would potentially be very lucrative to overturn the ban. [3] Corporations benefit from the protection of democratic states, as they provide bases of operations that shield their right to property and ensure stability and the rule of law. If corporations wish to benefit from these provisions they must be willing to accept the instructions of the states that house them regarding what can and cannot be sold to foreign powers.\n\n[1] Elgin, B. “House Bill May Ban US Surveillance Gear Sales”. Bloomberg. 9 December 2012. http://www.bloomberg.com/news/2011-12-09/house-bill-would-ban-surveillan...\n\n[2] Banks, M. “Senior MEP Calls for Freeze on Arms Sale to North Africa”. The Parliament.com. 7 July 2011. http://www.theparliament.com/latest-news/article/newsarticle/senior-mep-calls-for-freeze-on-arms-sale-to-north-africa/\n\n[3] See the debatabase debate ‘This House believes the European Union should lift its ban on member states selling arms to China’ http://idebate.org/debatabase/debates/international-affairs/european-union/house-believes-lift-arms-sales-ban-china\n", "title": "" }, { "docid": "9b10d302a8fd8413c0103e3ae405e72b", "text": "e internet freedom censorship ip digital freedoms freedom expression Advanced surveillance technology prevents dissidents from being able to organize and sue for freedom\n\nHigh-tech surveillance technology has given repressive governments and police states a new lease on life. Now more than ever they can intrude into every aspect of people’s lives, ensuring that dissent is cowed for fear of the ever present threat of the security services. The vision of Orwell’s 1984 has become a living nightmare for people all over the world. Their power has made it extremely difficult for movements for reform, government accountability, and democracy, which have foundered when faced with these sophisticated security apparatuses (Valentino-Devries, 2011). [1] By dominating the flow of information states have the power to keep their people in check and prevent them from ever posing a threat to their repressive status quo. Thus China blocks access to the internet and to other forms of communications in Tibet to “ensure the absolute security of Tibet’s ideological and cultural realm”. It cuts the Tibetan people off from outside world so as to prevent any rerun of the instability that occurred in 2008, which China blamed on the influence of the Dalai Lama from outside. [2]\n\nOnly external help in alleviating this censorship could allow activists to organize effectively and perhaps to one day bring about genuine reform and justice to their societies. The surveillance equipment on which these regimes rely is often only available from firms and governments in the democratic world where, by and large, technology is generally far more advanced than in the non-democratic world. Without access to these technologies, the regimes would be far more hard-pressed to keep rigid tabs on their citizens, allowing for the seeds of dissent to take root. Only then can the forces clamouring for democracy hope to be able to organise networks of activists, and to have their views considered by the state.\n\n[1] Valentino-Devries, J. “US Firm Acknowledges Syria Uses its Gear to Block Web”. Wall Street Journal. 
29 October 2011, http://online.wsj.com/article/SB10001424052970203687504577001911398596328.html\n\n[2] Human Rights Watch, “China: Attempts to Seal Off Tibet from Outside Information”, 13 July 2012, http://www.hrw.org/news/2012/07/13/china-attempts-seal-tibet-outside-information\n", "title": "" }, { "docid": "d46941aee69281e11a33b6d88fee72d4", "text": "e internet freedom censorship ip digital freedoms freedom expression This ban would have a powerful signalling effect expressing disapproval of non-democracies' system of government\n\nA ban on the sale of surveillance technology to non-democracies serves ultimately as a statement of disapproval. It shows that the undemocratic regimes cannot be trusted with the ability to spy on their people. This signal has several effects. An example of this international shaming affecting is the international bans on the use of landmines. Various states created a framework, the Ottawa Convention, [1] in which their condemnation pressured nearly every other state, including authoritarian regimes, to follow suit. [2] Domestically it serves to bolster people’s faith in the system of rights they value highly and enshrine in law. They can point to this ban as an example of their government’s desire to make a better world and not to increase repression for the sake of power or profit. In the undemocratic states themselves, the regime leaders will be faced with a significant public relations blow as they come under criticism. This serves to embolden and empower holders of dissenting opinions and to spark pro-democratic discourse. In the international community it makes an emphatic value judgement on the merit of certain systems of government, namely the superiority of democracy and government accountability to the people, principles most non-democracies still pay some form of lip-service to. Overall, this policy boosts the credibility of democracy, while undermining the influence of undemocratic states.\n\n[1] See the debatabase debate ‘This House (as the USA) would sign the Ottawa convention banning landmines’, http://idebate.org/debatabase/debates/international/house-usa-would-sign-ottawa-convention-banning-landmines\n\n[2] Wexler, L. “The International Deployment of Shame, Second-Best Responses, And Norm Entrepreneurship: The Campaign to Ban Landmine and the landmine Ban Treaty”. Arizona Journal of International and Comparative Law. 2003. http://www.ajicl.org/AJICL2003/vol203/wexlerarticle.pdf\n", "title": "" }, { "docid": "2969e6b98bf4dac7dee3c1f7450ebe0b", "text": "e internet freedom censorship ip digital freedoms freedom expression It is hypocritical for democratic governments to utilize surveillance technology to watch their own people while denying that technology to others\n\nIt is a fatal conceit to consider democracies somehow above the influence of using their surveillance technology to curtail the freedoms of their own citizens. The biggest customers of Western surveillance technology companies are wealthy democracies. The United Kingdom, for example, has one of the most-watched populations in the world, with a saturation of CCTV cameras far in excess of any dictatorship. [1] The PATRIOT Act in America, also, has given the federal government enormous scope for domestic spying. These powers are no less simply because the government is composed in part of elected officials. The security establishment is appointed, not elected, and their servicemen are promoted from within. 
It is base hypocrisy to pretend that the security systems are inherently more just when employed in democratic states than in undemocratic ones. They are used for the same purpose, to ensure that the state is protected and the status quo maintained. Democracies have no moral basis on which to base this policy.\n\n[1] BBC News. “Britain is ‘Surveillance Society’”. 2 November 2006, http://news.bbc.co.uk/2/hi/uk_news/6108496.stm\n", "title": "" }, { "docid": "3f6eb20625d74476d58496da8f16f521", "text": "e internet freedom censorship ip digital freedoms freedom expression The inability to use advanced technologies merely forces non-democracies to utilize more unsavoury methods to achieve their aims\n\nIf it is the aim of an undemocratic regime to use advanced surveillance technology to gather intelligence on, and ultimately crush, dissent it will find other means of doing so. Their calculus of survival is not changed, only their available methods. Their first port of call will be the more advanced non-democracies that might be able to supply comparable surveillance equipment. China’s military and surveillance technology is fast catching up to that of the West, and makes an appealing alternative source for equipment. [1] The only difference is that the Chinese have no compunction at all about how the technology is used, meaning worse outcomes for pro-democracy groups who run afoul of them. When this strategy fails regimes can turn to the tried and tested models of past decades, using physical force and other less technological modes of coercion to cow dissent. Again, this form of repression is quite effective, but it is also much more painful to those on the receiving end. Given the options, democracies supplying surveillance technology may be the best option for dissidents in undemocratic countries.\n\n[1] Walton, G. “China’s Golden Shield: Corporations and the Development of Surveillance Technology in the People’s Republic of China”. International Centre for Human Rights and Democratic Development. 2001.\n", "title": "" }, { "docid": "510b4d4cf9fb455d53b617b4e7466225", "text": "e internet freedom censorship ip digital freedoms freedom expression Presuming democracy is the only legitimate or worthwhile form of government is both inaccurate and unproductive\n\nAs much as the more liberal citizenry of many of the world’s democracies wish to believe otherwise, democracy as a system of government is not the only game in town. In fact, the growth of the strong-state/state-capitalism approach to government has gained much traction in developing countries that witness the incredible rise of China, which will before long be the world’s largest economy, flourish under an undemocratic model. [1] Chinas ruling communist party have legitimacy as a result of its performance and its historical role reunifying the country. [2] Democracies pretending they are the only meaningful or legitimate states only serve to antagonize their non-democratic neighbours. Such antagonism is doubly damaging, considering that all states, democracies included, rely on alliances and deals with other states to guarantee their security and prosperity. This has meant that through history democracies have had to deal with non-democracies as equal partners on the international stage, and this fact is no different today. States cannot always pick and choose their allies, and democracies best serve their citizens by furthering their genuine interests on the world stage. 
This policy serves as a wedge between democracies and their undemocratic allies that will only weaken their relations to the detriment of both. When it comes to surveillance technology, Western states’ unwillingness to share an important technology they are willing to use themselves causes tension between these states. Non-democracies have just as much right to the security that surveillance technology can provide as the more advanced states that develop those technologies.\n\n[1] Acemoglu, D. and Robinson, J. “Is State Capitalism Winning?”. Project Syndicate. 31 December 2012. http://www.project-syndicate.org/commentary/why-china-s-growth-model-will-fail-by-daron-acemoglu-and-james-a--robinson\n\n[2] Li, Eric X, “The Life of the Party”, Foreign Affairs, January/February 2013, http://www.foreignaffairs.com/articles/138476/eric-x-li/the-life-of-the-party?page=4\n", "title": "" }, { "docid": "dcebf8561dec8f8993379002195010e0", "text": "e internet freedom censorship ip digital freedoms freedom expression This ban will alienate non-democracies from discourse and stifle reform efforts\n\nWhen a state is declared illegitimate in the eyes of a large part of the international community, its natural reaction is one of upset and anger. A ban on the sale of surveillance technology to non-democracies would be seen as a brutal slap in the face to many regimes that consider themselves, and are often considered by their people, to be the legitimate government of their country. The ban will result in further tension between non-democracies and democracies, breaking down communication channels. Democracies are best able to effect change in regimes when they seek to engage them constructively, to galvanize them to make gradual connections to the development of civil society and to loosen restrictions on freedoms, such as reducing domestic spying. The ban makes it clear that the ultimate aim of democracies is to effectively overthrow the existing governments of non-democracies in favour of systems more like their own. The outcome of this conclusion is far less willingness on the part of these regimes to discuss reform, and makes it more likely that they will demonize pro-democracy activists within their borders as agents of foreign powers seeking to subvert and conquer them. This particular narrative has been used to great effect by many regimes throughout history, including North Korea and Zimbabwe; Justice Minister Patrick Chinamasa, for example, denounced a travel and arms sales ban as attempting to “undermine the inclusive government”. [1] By treating non-democracies as responsible actors democracies do much more in effectively furthering their own aims.\n\n[1] BBC News, “Zimbabwean minister denounces EU”, 14 September 2009, http://news.bbc.co.uk/1/hi/world/africa/8254367.stm\n", "title": "" } ]
arguana
185dde2d7f09260f019b61c4fb4c8fc9
The government here may legitimately limit ‘free speech’. We already set boundaries on what constitutes ‘free speech’ within our society. For example, we often endorse a ‘balancing act’ [1]: an individual may express their beliefs or opinions, but only up to the point where it does not impede the ‘protection of other human rights’ [2] – other people’s right not to be abused. In this case, if an individual expresses abuse towards another – especially racism – they may be deemed to be outside of the boundaries of free speech and can be punished for it. This motion is simply an extension of this principle; the kinds of sites which would be banned are those which perpetuate hatred or attack other groups in society, and so already fall outside of the protection of free speech. The harms that stem from these kinds of sites outweigh any potential harm from limiting speech in a small number of cases. [1] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11 [2] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11
[ { "docid": "63701d7fd42ab82224d5ca73ffa55d62", "text": "p ip internet digital freedoms access information house would censor Outright banning this kind of prejudice does not directly tackle it – it ignores it. A better way for the government to tackle derogatory and prejudicial speech is to engage with it in a public forum and reasonably point out the flaws and ignorance that it embodies, rather than desperately trying to hide it from public view. In this way, those who are being attacked by these websites would feel as if the government is actively protecting them and their rights and punishing those who have violated them, rather than simply closing a few websites and allowing their authors to continue in other ways. This motion does not solve the problem of prejudice in the way it claims to.\n", "title": "" } ]
[ { "docid": "9642012fabf69edc21605dffe53c6546", "text": "p ip internet digital freedoms access information house would censor Any information from television or newspapers has already been regulated, so it is not a problem that it may now appear somewhere on the internet. It is exactly because the internet is a forum for free information and expression that so many people engage with it; removing this is a dictatorial move against ordinary citizens who seek information without bias and undue censorship.\n", "title": "" }, { "docid": "e0d72292dbef7f359432250daa48e270", "text": "p ip internet digital freedoms access information house would censor Given the number of people who actually use Facebook [1] and other social networking sites, these occurrences were remarkably small [2] . These riots cannot be attributed to Facebook; it was the mindset of the rioters rather than Facebook itself which provided the raw determination for these riots to occur. If Facebook had been censored, they may have simply used mobile phones to co-ordinate their actions instead. Censoring these sites would not prevent such events, and would anger those who use Facebook to communicate with friends [3] and share photos [4] innocently.\n\n[1] BBC News, ‘Facebook hits 500m user milestone’, 21 July 2010, http://www.bbc.co.uk/news/technology-10713199 09/09/11.\n\n[2] BBC News, ‘UK Riots: Trouble erupts in English cities’, 10 August 2011, http://www.bbc.co.uk/news/uk-england-london-14460554 on 09/09/11.\n\n[3] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n\n[4] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n", "title": "" }, { "docid": "a7e2cb25b88f1db89a49535ba3783453", "text": "p ip internet digital freedoms access information house would censor While in a tiny minority of cases, such social networking sites can be used malevolently, they can also be a powerful force for good. For example, many social networking pages campaign for the end to issues such as domestic abuse [1] and racism [2] , and Facebook and Twitter were even used to bring citizens together to clean the streets after the riots in the UK in 2011. [3] Furthermore, this motion entails a broader move to blanket-ban areas of the internet without outlining a clear divide between what would be banned and what would not. For example, at what point would a website which discusses minority religious views be considered undesirable? Would it be at the expression of hatred for nationals of that country, in which case it might constitute hate speech, or not until it tended towards promoting action i.e. attacking other groups? 
Allowing censorship in these areas could feasibly be construed as obstructing the free speech of specified groups, which might in fact only increase militancy against a government or culture who are perceived as oppressing their right to an opinion of belief [4] .\n\n[1] BBC News, ‘Teenagers’ poem to aid domestic abuse Facebook campaign’, 4 February 2011, http://www.bbc.co.uk/news/uk-england-12367525 on 16/09/11\n\n[2] Unframing Migrants, ‘meeting for CAMPAIGN AGAINST RACISM’, facebook, 19 October 2010, http://www.facebook.com/events/168254109852708/ on 16/09/2011.\n\n[3] BBC News, ‘England riots: Twitter and Facebook users plan clean-up.’ 9 August 2011, http://www.bbc.co.uk/news/uk-england-london-14456857 on 16/09/11.\n\n[4] Marisol, ‘Nigeria: Boko Haram Jihadists say UN a partner in “oppression of believers”’, JihadWatch, 1 September 2011, http://www.jihadwatch.org/2011/09/nigeria-boko-haram-jihadists-say-un-a-partner-in-oppression-of-believers.html on 09/09/11\n", "title": "" }, { "docid": "7e30a92905e9c1f2c7de5ec464b9ee5d", "text": "p ip internet digital freedoms access information house would censor We already frown upon certain forms of speech [1] as we recognise that it is important to protect groups form prejudice and hatred. Allowing the expression of hatred does not automatically mean that ordinary people will denounce it as evil; rather, it normalises hatred and is more likely to be acceptable in the public domain. It also appears to show implicit acceptance or even support from the government when we take no steps to prevent this kind of damaging expression; as such, the government fails in its duty to ordinary citizens to protect them and represent their best interests.\n\n[1] Tatchell, Peter, ‘Hate speech v free speech’, guardian.co.uk, 10 October 2007, http://www.guardian.co.uk/commentisfree/2007/oct/10/hatespeechvfreespeech on 09/09/11.\n", "title": "" }, { "docid": "89b7d4d043ab16bc40e86ed7f6fad440", "text": "p ip internet digital freedoms access information house would censor Governments are often obliged to do things that the population doesn’t like – raising taxes is an obvious example. However, it is also recognised that sometimes the government has to do these things in order to represent the long-term, best interest of its people – whether or not it is a popular measure at the time.\n", "title": "" }, { "docid": "4b84b7f37087aba0f6512443c23e66f5", "text": "p ip internet digital freedoms access information house would censor The Internet may be a global resource, but if information on it is have a detrimental effect upon a particular country, it certainly is that government’s responsibility and right to tackle it. If it affects their society and the citizens within it, it affects the government and the means by which they can govern, particularly in relation to social policy. Moreover these websites, and specifically religious opinion websites, often seek to ‘recruit’ others to their school of thought or even to action; their purpose is often to gather support and followers [1] . Therefore there certainly is a risk that these people, who are often very intelligent and persuasive [2] , might lure others to them without protection by the government. 
It is a very real danger, and needs real protection.\n\n[1] Kiley, Sam, ‘Terrorists ‘May Recruit On Social Networks’’, SkyNews, 12 July 2011, http://news.sky.com/home/uk-news/article/16028962 on 09/09/11.\n\n[2] Ali, Iftakhar, ‘Terrorism – The Global Menace’, Universal Journal The Association of Young Journalists and Writers, http://www.ayjw.org/articles.php?id=944449 on 09/09/11.\n", "title": "" }, { "docid": "8657f8f86a3bd7342178eed2024a749e", "text": "p ip internet digital freedoms access information house would censor Even sites that appeared innocent have had a devastating effect on society.\n\nSome governments, such as the Vietnamese government [1] , have already seen sufficient cause to ban social networking sites such as Facebook. Recently in the UK, many major cities witnessed devastation and destruction as social networking sites were used to co-ordinate wide-scale riots which rampaged over London, Manchester, Birmingham, Worcestershire, Gloucester, Croydon, Bristol, Liverpool and Nottingham [2] . Rioters contacted each other through Facebook and blackberry instant messenger to ensure that they could cause maximum damage [3] , which resulted in the destruction of property [4] , physical violence towards others [5] , and even the deaths of three young men [6] . These events prove that seemingly innocent Internet sites can be used by anybody, even apparently normal citizens, to a devastating effect which has caused harm to thousands [7] . To protect the population and maintain order, it is essential that the government is able to act to censor sites that can be used as a forum and a tool for this kind of behaviour when such disruption is occurring.\n\n[1] AsiaNews.it, ‘Internet censorship tightening in Vietnam’, 22 June 2010, http://www.asianews.it/news-en/Internet-censorship-tightening-in-Vietnam... 09/09/11\n\n[2] BBC News, ‘England Riots’, 8 February 2012, http://www.bbc.co.uk/news/uk-14452097 on 09/09/11\n\n[3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 09/09/11\n\n[4] Hawkes, Alex, Garside, Juliette and Kollewe, Julia, ‘UK riots could cost taxpayer £100m’, guardian.co.uk, 9 August 2011, http://www.guardian.co.uk/uk/2011/aug/09/uk-riots-cost-taxpayer-100-million on 09/09/11.\n\n[5] Allen, Emily, ‘We will use water cannons on them: At last Cameron orders police to come down hard on the looters (some aged as young as NINE)’, Mail Online, 11 August 2011, http://www.dailymail.co.uk/news/article-2024203/UK-RIOTS-2011-David-Came... on 09/09/11.\n\n[6] Orr, James, ‘Birmingham riots: three men killed ‘protecting homes’’, The Telegraph, 10 August 2011, http://www.telegraph.co.uk/news/uknews/crime/8693095/Birmingham-riots-th... on 09/09/11.\n\n[7] Huffington Post, ‘UK Riots: What Long-Term Effects Could They Have?’, 10 August 2011, http://www.huffingtonpost.co.uk/2011/08/10/uk-riots-cleanup-could-co_n_9... on 09/09/11.\n", "title": "" }, { "docid": "cf8f3e67464b8672986a3e15122f5419", "text": "p ip internet digital freedoms access information house would censor Governments have a moral duty to protect its citizens from harmful sites.\n\nIn recent years, supposedly innocent sites such as social networking sites have been purposely used to harm others. Victims of cyber bullying have even led victims to commit suicide in extreme cases [1] [2] . 
Given that both physical [3] and psychological [4] damage have occurred through the use of social networking sites, such sites represent a danger to society as a whole. They have become a medium through which others express prejudice, including racism, towards groups and towards individuals [5] . Similarly, if a particularly country has a clear religious or cultural majority, it is fair to censor those sites which seek to undermine these principles and can be damaging to a large portion of the population. If we fail to take the measures required to remove these sites, which would be achieved through censorship, the government essentially fails to act on its principles by allowing such sites to exist. The government has a duty of care to its citizens [6] and must ensure their safety; censoring such sites is the best way to achieve this.\n\n[1] Moore, Victoria, ‘The fake world of Facebook and Bebo: How suicide and cyber bullying lurk behind the facade of “harmless fun”’, MailOnline, 4 August 2009, http://www.dailymail.co.uk/femail/article-1204062/The-fake-world-Facebook-Bebo-How-suicide-cyber-bullying-lurk-facade-harmless-fun.html on 16/09/11\n\n[2] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 16/09/11.\n\n[4] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[5] Counihan, Bella, ‘White power likes this – racist Facebook groups’, The Age, 3 February 2010, http://www.theage.com.au/opinion/society-and-culture/white-power-likes-t... on 16/09/11\n\n[6] Brownejacobson, ‘Councils owe vulnerable citizens duty of care’, 18 June 2008, http://www.brownejacobson.com/press_office/press_releases/councils_owe_v... 09/09/11\n", "title": "" }, { "docid": "5dbd3fbcfe478b4f36a815490fc0f1a2", "text": "p ip internet digital freedoms access information house would censor As an extensive form of media, the Internet should be subject to regulation just as other forms of media are.\n\nUnder the status quo, states already regulate other forms of media that could be used malevolently. Newspapers and books are subject to censorship [1] , and mediums such as television, film and video receive a higher degree of regulation [2] because it is widely recognised that moving pictures and sound can be more emotive and powerful than text and photographs or illustrations. 
The internet has many means of portraying information and opinion, including film clips and sound, and almost all the information found on television or in newspapers can be found somewhere on the internet [3] , alongside the millions of uploads from internet users themselves [4] .\n\n[1] Foerstel, Herbert N., ‘Banned in the Media’, Publishing Central, http://publishingcentral.com/articles/20030215-85-f98b.html?si=1 on 09/09/11\n\n[2] CityTVweb.com, ‘Television censorship’, 27 August 2007, http://www.citytvweb.com/television-censorship/ on 09/09/11.\n\n[3] Online Newspapers Directory for the World, ‘Thousands of Newspapers Listed by Country &amp; Region’, http://www.onlinenewspapers.com/ on 09/09/11\n\n[4] Boris, Cynthia, ’17 Percent of Photobucket Users Upload Video’s Once a Day’, Marketing Pilgrim, 9 September 2011, http://www.marketingpilgrim.com/2011/09/17-percent-of-photobucket-users-upload-video-once-a-day.html on 09/09/11\n", "title": "" }, { "docid": "e1d192514b54c85a12e0192a2964e9d2", "text": "p ip internet digital freedoms access information house would censor The Internet is a free domain and cannot becontrolled by the government.\n\nGiven that the Internet is used as an international [1] and public space [2] , the government has no right over the information which may be presented via the Internet. In Western liberal democracies, governments are elected on the basis by which they can serve their own country – how they will create or maintain laws that pertain specifically to that nation, and how they will govern the population. The Internet is not country-specific, but international and free. As such, no individual government should have a right to the information on it. Asserting false authority over the internet would paint the government as dictatorial and a ‘nanny state’ [3] , demonstrating a lack of respect for its citizens by assuming that they cannot protect themselves or recognise the nature of extremist or potentially harmful sites and take the individual decision to distance themselves from such sites.\n\n[1] Babel, ‘Towards communicating on the Internet in any language’, http://alis.isoc.org/index.en.html\n\n[2] Papacharissi, Zizi, ‘The virtual sphere’, New Media &amp; Society, Vol. 4 No. 1, pp 9-27, February 2002, http://nms.sagepub.com/content/4/1/9.short on 09/09/11\n\n[3] BBC. ‘A Point of View: In defence of the nanny state’. Published 04/02/2011. Accessed from http://www.bbc.co.uk/news/magazine-12360045 on\n", "title": "" }, { "docid": "1f5a17eaf9a8e63f50bf2d302da0440d", "text": "p ip internet digital freedoms access information house would censor Censorship is fundamentally incompatible with the notion of free speech.\n\nCensoring particular material essentially blinds the public to a complete world view by asserting the patronising view that ordinary citizens simply cannot read extreme material without recognising the flaws in it. This motion assumes that those who have access to material such as religious opinion sites will be influenced by it, rather than realising that it is morally dubious and denouncing it. The best way to combat prejudice is to expose it as a farce; this cannot be done if it is automatically and unthinkingly censored. Meanwhile, it is paradoxical for a government to assert the general benefits of free speech and then act in a contradictory and hypocritical manner by banning certain areas of the Internet. 
Free speech should not be limited; even if it is an expression of negativity, it should be publicly debated and logically criticised, rather than hidden altogether.\n", "title": "" }, { "docid": "872edd7325b5ec9e694e4693990fa90b", "text": "p ip internet digital freedoms access information house would censor People often react poorly to being censored by their governments.\n\nIn countries that do currently practice censorship of Internet information, their citizens often interpret this as suspicious and dictatorial behaviour. For example, in China growing discontent with the government’s constant censorship has led to public outrage [1] , and political satire which heavily criticises the government [2] . Censorship can easily be used malevolently and is not always in public interest; this motion supports the ignorance of the population by hiding information and the reality of the situation. Therefore the cost of suspicion by the population of the state makes censorship of any kind less than worthwhile and it is better to allow individuals to make their own choices.\n\n[1] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11\n\n[2] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11.\n", "title": "" } ]
arguana
b64c0628fbb906e5f9a33a6181120c0a
Even sites that appeared innocent have had a devastating effect on society. Some governments, such as the Vietnamese government [1] , have already seen sufficient cause to ban social networking sites such as Facebook. Recently in the UK, many major cities witnessed devastation and destruction as social networking sites were used to co-ordinate wide-scale riots which rampaged over London, Manchester, Birmingham, Worcestershire, Gloucester, Croydon, Bristol, Liverpool and Nottingham [2] . Rioters contacted each other through Facebook and blackberry instant messenger to ensure that they could cause maximum damage [3] , which resulted in the destruction of property [4] , physical violence towards others [5] , and even the deaths of three young men [6] . These events prove that seemingly innocent Internet sites can be used by anybody, even apparently normal citizens, to a devastating effect which has caused harm to thousands [7] . To protect the population and maintain order, it is essential that the government is able to act to censor sites that can be used as a forum and a tool for this kind of behaviour when such disruption is occurring. [1] AsiaNews.it, ‘Internet censorship tightening in Vietnam’, 22 June 2010, http://www.asianews.it/news-en/Internet-censorship-tightening-in-Vietnam... 09/09/11 [2] BBC News, ‘England Riots’, 8 February 2012, http://www.bbc.co.uk/news/uk-14452097 on 09/09/11 [3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 09/09/11 [4] Hawkes, Alex, Garside, Juliette and Kollewe, Julia, ‘UK riots could cost taxpayer £100m’, guardian.co.uk, 9 August 2011, http://www.guardian.co.uk/uk/2011/aug/09/uk-riots-cost-taxpayer-100-million on 09/09/11. [5] Allen, Emily, ‘We will use water cannons on them: At last Cameron orders police to come down hard on the looters (some aged as young as NINE)’, Mail Online, 11 August 2011, http://www.dailymail.co.uk/news/article-2024203/UK-RIOTS-2011-David-Came... on 09/09/11. [6] Orr, James, ‘Birmingham riots: three men killed ‘protecting homes’’, The Telegraph, 10 August 2011, http://www.telegraph.co.uk/news/uknews/crime/8693095/Birmingham-riots-th... on 09/09/11. [7] Huffington Post, ‘UK Riots: What Long-Term Effects Could They Have?’, 10 August 2011, http://www.huffingtonpost.co.uk/2011/08/10/uk-riots-cleanup-could-co_n_9... on 09/09/11.
[ { "docid": "e0d72292dbef7f359432250daa48e270", "text": "p ip internet digital freedoms access information house would censor Given the number of people who actually use Facebook [1] and other social networking sites, these occurrences were remarkably small [2] . These riots cannot be attributed to Facebook; it was the mindset of the rioters rather than Facebook itself which provided the raw determination for these riots to occur. If Facebook had been censored, they may have simply used mobile phones to co-ordinate their actions instead. Censoring these sites would not prevent such events, and would anger those who use Facebook to communicate with friends [3] and share photos [4] innocently.\n\n[1] BBC News, ‘Facebook hits 500m user milestone’, 21 July 2010, http://www.bbc.co.uk/news/technology-10713199 09/09/11.\n\n[2] BBC News, ‘UK Riots: Trouble erupts in English cities’, 10 August 2011, http://www.bbc.co.uk/news/uk-england-london-14460554 on 09/09/11.\n\n[3] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n\n[4] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n", "title": "" } ]
[ { "docid": "63701d7fd42ab82224d5ca73ffa55d62", "text": "p ip internet digital freedoms access information house would censor Outright banning this kind of prejudice does not directly tackle it – it ignores it. A better way for the government to tackle derogatory and prejudicial speech is to engage with it in a public forum and reasonably point out the flaws and ignorance that it embodies, rather than desperately trying to hide it from public view. In this way, those who are being attacked by these websites would feel as if the government is actively protecting them and their rights and punishing those who have violated them, rather than simply closing a few websites and allowing their authors to continue in other ways. This motion does not solve the problem of prejudice in the way it claims to.\n", "title": "" }, { "docid": "9642012fabf69edc21605dffe53c6546", "text": "p ip internet digital freedoms access information house would censor Any information from television or newspapers has already been regulated, so it is not a problem that it may now appear somewhere on the internet. It is exactly because the internet is a forum for free information and expression that so many people engage with it; removing this is a dictatorial move against ordinary citizens who seek information without bias and undue censorship.\n", "title": "" }, { "docid": "a7e2cb25b88f1db89a49535ba3783453", "text": "p ip internet digital freedoms access information house would censor While in a tiny minority of cases, such social networking sites can be used malevolently, they can also be a powerful force for good. For example, many social networking pages campaign for the end to issues such as domestic abuse [1] and racism [2] , and Facebook and Twitter were even used to bring citizens together to clean the streets after the riots in the UK in 2011. [3] Furthermore, this motion entails a broader move to blanket-ban areas of the internet without outlining a clear divide between what would be banned and what would not. For example, at what point would a website which discusses minority religious views be considered undesirable? Would it be at the expression of hatred for nationals of that country, in which case it might constitute hate speech, or not until it tended towards promoting action i.e. attacking other groups? 
Allowing censorship in these areas could feasibly be construed as obstructing the free speech of specified groups, which might in fact only increase militancy against a government or culture who are perceived as oppressing their right to an opinion of belief [4] .\n\n[1] BBC News, ‘Teenagers’ poem to aid domestic abuse Facebook campaign’, 4 February 2011, http://www.bbc.co.uk/news/uk-england-12367525 on 16/09/11\n\n[2] Unframing Migrants, ‘meeting for CAMPAIGN AGAINST RACISM’, facebook, 19 October 2010, http://www.facebook.com/events/168254109852708/ on 16/09/2011.\n\n[3] BBC News, ‘England riots: Twitter and Facebook users plan clean-up.’ 9 August 2011, http://www.bbc.co.uk/news/uk-england-london-14456857 on 16/09/11.\n\n[4] Marisol, ‘Nigeria: Boko Haram Jihadists say UN a partner in “oppression of believers”’, JihadWatch, 1 September 2011, http://www.jihadwatch.org/2011/09/nigeria-boko-haram-jihadists-say-un-a-partner-in-oppression-of-believers.html on 09/09/11\n", "title": "" }, { "docid": "7e30a92905e9c1f2c7de5ec464b9ee5d", "text": "p ip internet digital freedoms access information house would censor We already frown upon certain forms of speech [1] as we recognise that it is important to protect groups form prejudice and hatred. Allowing the expression of hatred does not automatically mean that ordinary people will denounce it as evil; rather, it normalises hatred and is more likely to be acceptable in the public domain. It also appears to show implicit acceptance or even support from the government when we take no steps to prevent this kind of damaging expression; as such, the government fails in its duty to ordinary citizens to protect them and represent their best interests.\n\n[1] Tatchell, Peter, ‘Hate speech v free speech’, guardian.co.uk, 10 October 2007, http://www.guardian.co.uk/commentisfree/2007/oct/10/hatespeechvfreespeech on 09/09/11.\n", "title": "" }, { "docid": "89b7d4d043ab16bc40e86ed7f6fad440", "text": "p ip internet digital freedoms access information house would censor Governments are often obliged to do things that the population doesn’t like – raising taxes is an obvious example. However, it is also recognised that sometimes the government has to do these things in order to represent the long-term, best interest of its people – whether or not it is a popular measure at the time.\n", "title": "" }, { "docid": "4b84b7f37087aba0f6512443c23e66f5", "text": "p ip internet digital freedoms access information house would censor The Internet may be a global resource, but if information on it is have a detrimental effect upon a particular country, it certainly is that government’s responsibility and right to tackle it. If it affects their society and the citizens within it, it affects the government and the means by which they can govern, particularly in relation to social policy. Moreover these websites, and specifically religious opinion websites, often seek to ‘recruit’ others to their school of thought or even to action; their purpose is often to gather support and followers [1] . Therefore there certainly is a risk that these people, who are often very intelligent and persuasive [2] , might lure others to them without protection by the government. 
It is a very real danger, and needs real protection.\n\n[1] Kiley, Sam, ‘Terrorists ‘May Recruit On Social Networks’’, SkyNews, 12 July 2011, http://news.sky.com/home/uk-news/article/16028962 on 09/09/11.\n\n[2] Ali, Iftakhar, ‘Terrorism – The Global Menace’, Universal Journal The Association of Young Journalists and Writers, http://www.ayjw.org/articles.php?id=944449 on 09/09/11.\n", "title": "" }, { "docid": "43b70cff98ab1bb72d63411d74c1cb2f", "text": "p ip internet digital freedoms access information house would censor The government here may legitimately limit ‘free speech’.\n\nWe already set boundaries on what constitutes ‘free speech’ within our society. For example, we often endorse a ‘balancing act’ [1] an individual may express their beliefs or opinions, but only up to the point where it does not impede the ‘protection of other human rights’ [2] – other peoples’ right not to be abused. In this case, if an individual expresses abuse towards another – especially racism - they may be deemed to be outside of the boundaries or free speech and can be punished for it. This motion is simply an extension of this principle; the kinds of sites which would be banned are those which perpetuate hatred or attack other groups in society, an so already fall outside of the protection of free speech. The harms that stem from these kinds of sites outweigh any potential harm from limiting speech in a small number of cases.\n\n[1] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11\n\n[2] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11\n", "title": "" }, { "docid": "cf8f3e67464b8672986a3e15122f5419", "text": "p ip internet digital freedoms access information house would censor Governments have a moral duty to protect its citizens from harmful sites.\n\nIn recent years, supposedly innocent sites such as social networking sites have been purposely used to harm others. Victims of cyber bullying have even led victims to commit suicide in extreme cases [1] [2] . Given that both physical [3] and psychological [4] damage have occurred through the use of social networking sites, such sites represent a danger to society as a whole. They have become a medium through which others express prejudice, including racism, towards groups and towards individuals [5] . Similarly, if a particularly country has a clear religious or cultural majority, it is fair to censor those sites which seek to undermine these principles and can be damaging to a large portion of the population. If we fail to take the measures required to remove these sites, which would be achieved through censorship, the government essentially fails to act on its principles by allowing such sites to exist. 
The government has a duty of care to its citizens [6] and must ensure their safety; censoring such sites is the best way to achieve this.\n\n[1] Moore, Victoria, ‘The fake world of Facebook and Bebo: How suicide and cyber bullying lurk behind the facade of “harmless fun”’, MailOnline, 4 August 2009, http://www.dailymail.co.uk/femail/article-1204062/The-fake-world-Facebook-Bebo-How-suicide-cyber-bullying-lurk-facade-harmless-fun.html on 16/09/11\n\n[2] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 16/09/11.\n\n[4] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[5] Counihan, Bella, ‘White power likes this – racist Facebook groups’, The Age, 3 February 2010, http://www.theage.com.au/opinion/society-and-culture/white-power-likes-t... on 16/09/11\n\n[6] Brownejacobson, ‘Councils owe vulnerable citizens duty of care’, 18 June 2008, http://www.brownejacobson.com/press_office/press_releases/councils_owe_v... 09/09/11\n", "title": "" }, { "docid": "5dbd3fbcfe478b4f36a815490fc0f1a2", "text": "p ip internet digital freedoms access information house would censor As an extensive form of media, the Internet should be subject to regulation just as other forms of media are.\n\nUnder the status quo, states already regulate other forms of media that could be used malevolently. Newspapers and books are subject to censorship [1] , and mediums such as television, film and video receive a higher degree of regulation [2] because it is widely recognised that moving pictures and sound can be more emotive and powerful than text and photographs or illustrations. The internet has many means of portraying information and opinion, including film clips and sound, and almost all the information found on television or in newspapers can be found somewhere on the internet [3] , alongside the millions of uploads from internet users themselves [4] .\n\n[1] Foerstel, Herbert N., ‘Banned in the Media’, Publishing Central, http://publishingcentral.com/articles/20030215-85-f98b.html?si=1 on 09/09/11\n\n[2] CityTVweb.com, ‘Television censorship’, 27 August 2007, http://www.citytvweb.com/television-censorship/ on 09/09/11.\n\n[3] Online Newspapers Directory for the World, ‘Thousands of Newspapers Listed by Country &amp; Region’, http://www.onlinenewspapers.com/ on 09/09/11\n\n[4] Boris, Cynthia, ’17 Percent of Photobucket Users Upload Video’s Once a Day’, Marketing Pilgrim, 9 September 2011, http://www.marketingpilgrim.com/2011/09/17-percent-of-photobucket-users-upload-video-once-a-day.html on 09/09/11\n", "title": "" }, { "docid": "e1d192514b54c85a12e0192a2964e9d2", "text": "p ip internet digital freedoms access information house would censor The Internet is a free domain and cannot becontrolled by the government.\n\nGiven that the Internet is used as an international [1] and public space [2] , the government has no right over the information which may be presented via the Internet. In Western liberal democracies, governments are elected on the basis by which they can serve their own country – how they will create or maintain laws that pertain specifically to that nation, and how they will govern the population. 
The Internet is not country-specific, but international and free. As such, no individual government should have a right to the information on it. Asserting false authority over the internet would paint the government as dictatorial and a ‘nanny state’ [3] , demonstrating a lack of respect for its citizens by assuming that they cannot protect themselves or recognise the nature of extremist or potentially harmful sites and take the individual decision to distance themselves from such sites.\n\n[1] Babel, ‘Towards communicating on the Internet in any language’, http://alis.isoc.org/index.en.html\n\n[2] Papacharissi, Zizi, ‘The virtual sphere’, New Media &amp; Society, Vol. 4 No. 1, pp 9-27, February 2002, http://nms.sagepub.com/content/4/1/9.short on 09/09/11\n\n[3] BBC. ‘A Point of View: In defence of the nanny state’. Published 04/02/2011. Accessed from http://www.bbc.co.uk/news/magazine-12360045 on\n", "title": "" }, { "docid": "1f5a17eaf9a8e63f50bf2d302da0440d", "text": "p ip internet digital freedoms access information house would censor Censorship is fundamentally incompatible with the notion of free speech.\n\nCensoring particular material essentially blinds the public to a complete world view by asserting the patronising view that ordinary citizens simply cannot read extreme material without recognising the flaws in it. This motion assumes that those who have access to material such as religious opinion sites will be influenced by it, rather than realising that it is morally dubious and denouncing it. The best way to combat prejudice is to expose it as a farce; this cannot be done if it is automatically and unthinkingly censored. Meanwhile, it is paradoxical for a government to assert the general benefits of free speech and then act in a contradictory and hypocritical manner by banning certain areas of the Internet. Free speech should not be limited; even if it is an expression of negativity, it should be publicly debated and logically criticised, rather than hidden altogether.\n", "title": "" }, { "docid": "872edd7325b5ec9e694e4693990fa90b", "text": "p ip internet digital freedoms access information house would censor People often react poorly to being censored by their governments.\n\nIn countries that do currently practice censorship of Internet information, their citizens often interpret this as suspicious and dictatorial behaviour. For example, in China growing discontent with the government’s constant censorship has led to public outrage [1] , and political satire which heavily criticises the government [2] . Censorship can easily be used malevolently and is not always in public interest; this motion supports the ignorance of the population by hiding information and the reality of the situation. Therefore the cost of suspicion by the population of the state makes censorship of any kind less than worthwhile and it is better to allow individuals to make their own choices.\n\n[1] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11\n\n[2] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11.\n", "title": "" } ]
arguana
6672511faa7c86551282fd4ad9d924f9
The Internet is a free domain and cannot be controlled by the government. Given that the Internet is used as an international [1] and public space [2] , the government has no right over the information which may be presented via the Internet. In Western liberal democracies, governments are elected on the basis by which they can serve their own country – how they will create or maintain laws that pertain specifically to that nation, and how they will govern the population. The Internet is not country-specific, but international and free. As such, no individual government should have a right to the information on it. Asserting false authority over the internet would paint the government as dictatorial and a ‘nanny state’ [3] , demonstrating a lack of respect for its citizens by assuming that they cannot protect themselves or recognise the nature of extremist or potentially harmful sites and take the individual decision to distance themselves from such sites. [1] Babel, ‘Towards communicating on the Internet in any language’, http://alis.isoc.org/index.en.html [2] Papacharissi, Zizi, ‘The virtual sphere’, New Media &amp; Society, Vol. 4 No. 1, pp 9-27, February 2002, http://nms.sagepub.com/content/4/1/9.short on 09/09/11 [3] BBC. ‘A Point of View: In defence of the nanny state’. Published 04/02/2011. Accessed from http://www.bbc.co.uk/news/magazine-12360045 on
[ { "docid": "4b84b7f37087aba0f6512443c23e66f5", "text": "p ip internet digital freedoms access information house would censor The Internet may be a global resource, but if information on it is have a detrimental effect upon a particular country, it certainly is that government’s responsibility and right to tackle it. If it affects their society and the citizens within it, it affects the government and the means by which they can govern, particularly in relation to social policy. Moreover these websites, and specifically religious opinion websites, often seek to ‘recruit’ others to their school of thought or even to action; their purpose is often to gather support and followers [1] . Therefore there certainly is a risk that these people, who are often very intelligent and persuasive [2] , might lure others to them without protection by the government. It is a very real danger, and needs real protection.\n\n[1] Kiley, Sam, ‘Terrorists ‘May Recruit On Social Networks’’, SkyNews, 12 July 2011, http://news.sky.com/home/uk-news/article/16028962 on 09/09/11.\n\n[2] Ali, Iftakhar, ‘Terrorism – The Global Menace’, Universal Journal The Association of Young Journalists and Writers, http://www.ayjw.org/articles.php?id=944449 on 09/09/11.\n", "title": "" } ]
[ { "docid": "7e30a92905e9c1f2c7de5ec464b9ee5d", "text": "p ip internet digital freedoms access information house would censor We already frown upon certain forms of speech [1] as we recognise that it is important to protect groups form prejudice and hatred. Allowing the expression of hatred does not automatically mean that ordinary people will denounce it as evil; rather, it normalises hatred and is more likely to be acceptable in the public domain. It also appears to show implicit acceptance or even support from the government when we take no steps to prevent this kind of damaging expression; as such, the government fails in its duty to ordinary citizens to protect them and represent their best interests.\n\n[1] Tatchell, Peter, ‘Hate speech v free speech’, guardian.co.uk, 10 October 2007, http://www.guardian.co.uk/commentisfree/2007/oct/10/hatespeechvfreespeech on 09/09/11.\n", "title": "" }, { "docid": "89b7d4d043ab16bc40e86ed7f6fad440", "text": "p ip internet digital freedoms access information house would censor Governments are often obliged to do things that the population doesn’t like – raising taxes is an obvious example. However, it is also recognised that sometimes the government has to do these things in order to represent the long-term, best interest of its people – whether or not it is a popular measure at the time.\n", "title": "" }, { "docid": "63701d7fd42ab82224d5ca73ffa55d62", "text": "p ip internet digital freedoms access information house would censor Outright banning this kind of prejudice does not directly tackle it – it ignores it. A better way for the government to tackle derogatory and prejudicial speech is to engage with it in a public forum and reasonably point out the flaws and ignorance that it embodies, rather than desperately trying to hide it from public view. In this way, those who are being attacked by these websites would feel as if the government is actively protecting them and their rights and punishing those who have violated them, rather than simply closing a few websites and allowing their authors to continue in other ways. This motion does not solve the problem of prejudice in the way it claims to.\n", "title": "" }, { "docid": "9642012fabf69edc21605dffe53c6546", "text": "p ip internet digital freedoms access information house would censor Any information from television or newspapers has already been regulated, so it is not a problem that it may now appear somewhere on the internet. It is exactly because the internet is a forum for free information and expression that so many people engage with it; removing this is a dictatorial move against ordinary citizens who seek information without bias and undue censorship.\n", "title": "" }, { "docid": "e0d72292dbef7f359432250daa48e270", "text": "p ip internet digital freedoms access information house would censor Given the number of people who actually use Facebook [1] and other social networking sites, these occurrences were remarkably small [2] . These riots cannot be attributed to Facebook; it was the mindset of the rioters rather than Facebook itself which provided the raw determination for these riots to occur. If Facebook had been censored, they may have simply used mobile phones to co-ordinate their actions instead. 
Censoring these sites would not prevent such events, and would anger those who use Facebook to communicate with friends [3] and share photos [4] innocently.\n\n[1] BBC News, ‘Facebook hits 500m user milestone’, 21 July 2010, http://www.bbc.co.uk/news/technology-10713199 09/09/11.\n\n[2] BBC News, ‘UK Riots: Trouble erupts in English cities’, 10 August 2011, http://www.bbc.co.uk/news/uk-england-london-14460554 on 09/09/11.\n\n[3] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n\n[4] Santos, Elena, “The ultimate social network”, softonic, http://facebook.en.softonic.com/web-apps on 09/09/11.\n", "title": "" }, { "docid": "a7e2cb25b88f1db89a49535ba3783453", "text": "p ip internet digital freedoms access information house would censor While in a tiny minority of cases, such social networking sites can be used malevolently, they can also be a powerful force for good. For example, many social networking pages campaign for an end to issues such as domestic abuse [1] and racism [2] , and Facebook and Twitter were even used to bring citizens together to clean the streets after the riots in the UK in 2011. [3] Furthermore, this motion entails a broader move to blanket-ban areas of the internet without outlining a clear divide between what would be banned and what would not. For example, at what point would a website which discusses minority religious views be considered undesirable? Would it be at the expression of hatred for nationals of that country, in which case it might constitute hate speech, or not until it tended towards promoting action i.e. attacking other groups? Allowing censorship in these areas could feasibly be construed as obstructing the free speech of specified groups, which might in fact only increase militancy against a government or culture who are perceived as oppressing their right to an opinion or belief [4] .\n\n[1] BBC News, ‘Teenagers’ poem to aid domestic abuse Facebook campaign’, 4 February 2011, http://www.bbc.co.uk/news/uk-england-12367525 on 16/09/11\n\n[2] Unframing Migrants, ‘meeting for CAMPAIGN AGAINST RACISM’, facebook, 19 October 2010, http://www.facebook.com/events/168254109852708/ on 16/09/2011.\n\n[3] BBC News, ‘England riots: Twitter and Facebook users plan clean-up.’ 9 August 2011, http://www.bbc.co.uk/news/uk-england-london-14456857 on 16/09/11.\n\n[4] Marisol, ‘Nigeria: Boko Haram Jihadists say UN a partner in “oppression of believers”’, JihadWatch, 1 September 2011, http://www.jihadwatch.org/2011/09/nigeria-boko-haram-jihadists-say-un-a-partner-in-oppression-of-believers.html on 09/09/11\n", "title": "" }, { "docid": "1f5a17eaf9a8e63f50bf2d302da0440d", "text": "p ip internet digital freedoms access information house would censor Censorship is fundamentally incompatible with the notion of free speech.\n\nCensoring particular material essentially blinds the public to a complete world view by asserting the patronising view that ordinary citizens simply cannot read extreme material without recognising the flaws in it. This motion assumes that those who have access to material such as religious opinion sites will be influenced by it, rather than realising that it is morally dubious and denouncing it. The best way to combat prejudice is to expose it as a farce; this cannot be done if it is automatically and unthinkingly censored. 
Meanwhile, it is paradoxical for a government to assert the general benefits of free speech and then act in a contradictory and hypocritical manner by banning certain areas of the Internet. Free speech should not be limited; even if it is an expression of negativity, it should be publicly debated and logically criticised, rather than hidden altogether.\n", "title": "" }, { "docid": "872edd7325b5ec9e694e4693990fa90b", "text": "p ip internet digital freedoms access information house would censor People often react poorly to being censored by their governments.\n\nIn countries that do currently practice censorship of Internet information, their citizens often interpret this as suspicious and dictatorial behaviour. For example, in China growing discontent with the government’s constant censorship has led to public outrage [1] , and political satire which heavily criticises the government [2] . Censorship can easily be used malevolently and is not always in public interest; this motion supports the ignorance of the population by hiding information and the reality of the situation. Therefore the cost of suspicion by the population of the state makes censorship of any kind less than worthwhile and it is better to allow individuals to make their own choices.\n\n[1] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11\n\n[2] Bennett, Isabella, ‘Media Censorship in China’, Council on Foreign Relations, 7 March 2011, http://www.cfr.org/china/media-censorship-china/p11515 on 09/09/11.\n", "title": "" }, { "docid": "43b70cff98ab1bb72d63411d74c1cb2f", "text": "p ip internet digital freedoms access information house would censor The government here may legitimately limit ‘free speech’.\n\nWe already set boundaries on what constitutes ‘free speech’ within our society. For example, we often endorse a ‘balancing act’ [1] an individual may express their beliefs or opinions, but only up to the point where it does not impede the ‘protection of other human rights’ [2] – other people’s right not to be abused. In this case, if an individual expresses abuse towards another – especially racism - they may be deemed to be outside of the boundaries of free speech and can be punished for it. This motion is simply an extension of this principle; the kinds of sites which would be banned are those which perpetuate hatred or attack other groups in society, and so already fall outside of the protection of free speech. The harms that stem from these kinds of sites outweigh any potential harm from limiting speech in a small number of cases.\n\n[1] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11\n\n[2] Hera.org, ‘Freedom of Expression’, Human Rights Education Association, http://www.hrea.org/index.php?doc_id=408 on 09/09/11\n", "title": "" }, { "docid": "8657f8f86a3bd7342178eed2024a749e", "text": "p ip internet digital freedoms access information house would censor Even sites that appeared innocent have had a devastating effect on society.\n\nSome governments, such as the Vietnamese government [1] , have already seen sufficient cause to ban social networking sites such as Facebook. 
Recently in the UK, many major cities witnessed devastation and destruction as social networking sites were used to co-ordinate wide-scale riots which rampaged over London, Manchester, Birmingham, Worcestershire, Gloucester, Croydon, Bristol, Liverpool and Nottingham [2] . Rioters contacted each other through Facebook and blackberry instant messenger to ensure that they could cause maximum damage [3] , which resulted in the destruction of property [4] , physical violence towards others [5] , and even the deaths of three young men [6] . These events prove that seemingly innocent Internet sites can be used by anybody, even apparently normal citizens, to devastating effect which has caused harm to thousands [7] . To protect the population and maintain order, it is essential that the government is able to act to censor sites that can be used as a forum and a tool for this kind of behaviour when such disruption is occurring.\n\n[1] AsiaNews.it, ‘Internet censorship tightening in Vietnam’, 22 June 2010, http://www.asianews.it/news-en/Internet-censorship-tightening-in-Vietnam... 09/09/11\n\n[2] BBC News, ‘England Riots’, 8 February 2012, http://www.bbc.co.uk/news/uk-14452097 on 09/09/11\n\n[3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 09/09/11\n\n[4] Hawkes, Alex, Garside, Juliette and Kollewe, Julia, ‘UK riots could cost taxpayer £100m’, guardian.co.uk, 9 August 2011, http://www.guardian.co.uk/uk/2011/aug/09/uk-riots-cost-taxpayer-100-million on 09/09/11.\n\n[5] Allen, Emily, ‘We will use water cannons on them: At last Cameron orders police to come down hard on the looters (some aged as young as NINE)’, Mail Online, 11 August 2011, http://www.dailymail.co.uk/news/article-2024203/UK-RIOTS-2011-David-Came... on 09/09/11.\n\n[6] Orr, James, ‘Birmingham riots: three men killed ‘protecting homes’’, The Telegraph, 10 August 2011, http://www.telegraph.co.uk/news/uknews/crime/8693095/Birmingham-riots-th... on 09/09/11.\n\n[7] Huffington Post, ‘UK Riots: What Long-Term Effects Could They Have?’, 10 August 2011, http://www.huffingtonpost.co.uk/2011/08/10/uk-riots-cleanup-could-co_n_9... on 09/09/11.\n", "title": "" }, { "docid": "cf8f3e67464b8672986a3e15122f5419", "text": "p ip internet digital freedoms access information house would censor Governments have a moral duty to protect their citizens from harmful sites.\n\nIn recent years, supposedly innocent sites such as social networking sites have been purposely used to harm others. Cyber bullying has even led victims to commit suicide in extreme cases [1] [2] . Given that both physical [3] and psychological [4] damage have occurred through the use of social networking sites, such sites represent a danger to society as a whole. They have become a medium through which others express prejudice, including racism, towards groups and towards individuals [5] . Similarly, if a particular country has a clear religious or cultural majority, it is fair to censor those sites which seek to undermine these principles and can be damaging to a large portion of the population. If we fail to take the measures required to remove these sites, which would be achieved through censorship, the government essentially fails to act on its principles by allowing such sites to exist. 
The government has a duty of care to its citizens [6] and must ensure their safety; censoring such sites is the best way to achieve this.\n\n[1] Moore, Victoria, ‘The fake world of Facebook and Bebo: How suicide and cyber bullying lurk behind the facade of “harmless fun”’, MailOnline, 4 August 2009, http://www.dailymail.co.uk/femail/article-1204062/The-fake-world-Facebook-Bebo-How-suicide-cyber-bullying-lurk-facade-harmless-fun.html on 16/09/11\n\n[2] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[3] BBC News, ‘England riots: Two jailed for using Facebook to incite disorder’, 16 August 2011, http://www.bbc.co.uk/news/uk-england-manchester-14551582 on 16/09/11.\n\n[4] Good Morning America, ‘Parents: Cyber Bullying Led to Teen’s Suicide’, ABC News, 19 November 2007, http://abcnews.go.com/GMA/story?id=3882520&amp;page=1#.T0N_1fFmIQo on 16/09/11\n\n[5] Counihan, Bella, ‘White power likes this – racist Facebook groups’, The Age, 3 February 2010, http://www.theage.com.au/opinion/society-and-culture/white-power-likes-t... on 16/09/11\n\n[6] Brownejacobson, ‘Councils owe vulnerable citizens duty of care’, 18 June 2008, http://www.brownejacobson.com/press_office/press_releases/councils_owe_v... 09/09/11\n", "title": "" }, { "docid": "5dbd3fbcfe478b4f36a815490fc0f1a2", "text": "p ip internet digital freedoms access information house would censor As an extensive form of media, the Internet should be subject to regulation just as other forms of media are.\n\nUnder the status quo, states already regulate other forms of media that could be used malevolently. Newspapers and books are subject to censorship [1] , and mediums such as television, film and video receive a higher degree of regulation [2] because it is widely recognised that moving pictures and sound can be more emotive and powerful than text and photographs or illustrations. The internet has many means of portraying information and opinion, including film clips and sound, and almost all the information found on television or in newspapers can be found somewhere on the internet [3] , alongside the millions of uploads from internet users themselves [4] .\n\n[1] Foerstel, Herbert N., ‘Banned in the Media’, Publishing Central, http://publishingcentral.com/articles/20030215-85-f98b.html?si=1 on 09/09/11\n\n[2] CityTVweb.com, ‘Television censorship’, 27 August 2007, http://www.citytvweb.com/television-censorship/ on 09/09/11.\n\n[3] Online Newspapers Directory for the World, ‘Thousands of Newspapers Listed by Country &amp; Region’, http://www.onlinenewspapers.com/ on 09/09/11\n\n[4] Boris, Cynthia, ’17 Percent of Photobucket Users Upload Video’s Once a Day’, Marketing Pilgrim, 9 September 2011, http://www.marketingpilgrim.com/2011/09/17-percent-of-photobucket-users-upload-video-once-a-day.html on 09/09/11\n", "title": "" } ]
arguana
214b34aa103063a14bfabda82d4fc4b3
Transparency prevents public relations disasters Transparency is necessary to avoid public relations disasters; particularly in countries where the media has some freedom to investigate for themselves. It is clearly the best policy for the military to make sure all the information is released along with the reasons behind actions rather than having the media finding individual pieces of a whole and speculating to fill the gaps. A good example would be a collision on 16th January 1966 between a B-52 bomber and a KC-135 tanker while attempting to refuel that destroyed both planes. Accidents happen, and this one cost 11 lives, but could have been much worse as the B-52 had four nuclear bombs on board, which were not armed and did not detonate. In this case an initial lack of information rapidly turned into a public relations disaster that was stemmed by much more openness by the military and the US Ambassador in Spain. The release of the information reduces the room for the press to fill in the gaps with harmful speculation. [1] In this case there was never much chance of national security implications or a break with Spain as the country was ruled by the dictator Franco, someone who would hardly pay attention to public opinion. But in a democracy a slow and closed response could seriously damage relations. [1] Stiles, David, ‘A Fusion Bomb over Andalucia: U.S. Information Policy and the 1966 Palomares Incident’, Journal of War Studies, Vol.8, No.1, Winter 2006, pp.49-67, p.65
[ { "docid": "ea1bf901b9c016e50b93d6b38fdfb10d", "text": "e media and good government politics defence government digital freedoms This is clearly not always the case. Often transparency means that the public becomes aware when there is little need for them to know. There had been previous nuclear accidents that had caused no damage, and had not been noticed, such as in Goldsboro, N.C. in 1961. [1] If there had been a media frenzy fuelled by released information there would clearly have been much more of a public relations disaster than there was with no one noticing. Since there was no harm done there is little reason why such a media circus should have been encouraged. And even without media attention the incident led to increased safeguards.\n\n[1] Stiles, David, ‘A Fusion Bomb over Andalucia: U.S. Information Policy and the 1966 Palomares Incident’, Journal of War Studies, Vol.8, No.1, Winter 2006, pp.49-67, p.51\n", "title": "" } ]
[ { "docid": "8414e4254b854a82f39dea1f7c7e4b12", "text": "e media and good government politics defence government digital freedoms Being a citizen does not come with a right to know everything that the state does. In much the same way being a shareholder does not mean you get to know absolutely everything every person in a business does. Instead you get the headlines and a summary, most of the time how the business goes about getting the results is left to the management. Ultimately the state’s purpose is to protect its citizens and this comes before letting them know everything about how that is done.\n", "title": "" }, { "docid": "669b66c5254b042f7fdf8e3dafcb8a0b", "text": "e media and good government politics defence government digital freedoms Transparency may mean that mistakes or problems are found faster, but it does not mean they are going to be corrected faster. Waste in the defense budget has been known about for years yet it still keeps coming up. Transparency shines a light on the problem but that is not helpful if it does not result in action to solve the problem.\n", "title": "" }, { "docid": "111d28308056b7bde04aa9dfb31ed7ef", "text": "e media and good government politics defence government digital freedoms Transparency in situations of international tension is tricky; with complete transparency how do you engage in bluffing? The state that is completely transparent is tying one hand behind its back in international negotiations.\n\nIt is also wrong to assume that transparency will always reduce tensions. Sometimes two countries just have completely incompatible interests. In such instances complete transparency is simply going to set them on a collision course. It is then much better for there to be a bit less transparency so that both sides can fudge the issue and sign up to an agreement while interpreting it in different ways.\n", "title": "" }, { "docid": "8222066b85109d2a33c55a3163c44a4a", "text": "e media and good government politics defence government digital freedoms Trust goes two ways; the people have to trust that on some issues, such as security, the government is doing the right thing to protect them even when it cannot release all relevant information. But even if the military and security services do claim to be completely transparent then how is everyone to know that it really is being as transparent as they say? Unfortunately there are information asymmetries between members of the public and the government; the member of the public is unlikely to have the capability to find out if the government is hiding something from them. [1] Other countries too are likely to be suspicious of ‘complete transparency’ and simply believe that this is cover for doing something more nefarious. Trust then cannot be only about being transparent in everything.\n\n[1] Stiglitz, Joseph, ‘Transparency in Government’, in Roumeen Islam, The right to tell: the role of the mass media in economic development, World Bank Publications, 2002, p.28\n", "title": "" }, { "docid": "2eafb3797f068beb299caa9d706002d4", "text": "e media and good government politics defence government digital freedoms Drones are an unusual example (though not unique) because they are a new form of warfare over which there are few clear rules and norms. This means that making it transparent will create new norms. However the vast majority of covert operations, if made public, would clearly be illegal and would have to be ended. 
Drones are also unusual in that the public sees few downsides to the killing, which means there would be less public pressure than in most such operations.\n", "title": "" }, { "docid": "09379b5481b93d163504bdcf5b12e9c9", "text": "e media and good government politics defence government digital freedoms Coalitions can form behind expansionist policies regardless of whether there is transparency. If there is no transparency then it is simply an invitation for these groups to overestimate the strength of their own state compared to their opponents. Where there is transparency the figures will at least be available to counter their arguments. It should not be surprising that interest groups do not have as much influence in creating expansionist policy in democracies. [1]\n\nTransparency showing when a state is to be eclipsed is a greater concern but a lack of transparency in such a case is just as bad. No transparency will simply encourage the fears of the state that is to be eclipsed that the rising state is hostile and not to be trusted.\n\n[1] Snyder, Jack, Myths of Empire, Cornell University Press, 1991, p.18\n", "title": "" }, { "docid": "5ee863b175d8bc7f083fcb10f9f8e6b7", "text": "e media and good government politics defence government digital freedoms The public is rational and can make its own assessment of risk. The best course in such cases is transparency and education. If all relevant information is released, along with analysis as to the risk presented by the threat, then the public can be best informed about what kind of threats they need to be prepared for. Terrorism has been blown out of proportion because terrorist attacks are single deadly incidents that are simple to report and have a good narrative to provide 24/7 coverage that the public will lap up. [1] As a result there has been much more media coverage than other threats. It can then be no surprise that the public overestimate the threat posed by terrorism as the public are told what risks are relevant by the amount of media coverage. [2]\n\n[1] Engelhardt, Tom, ‘Casualties from Terrorism Are Minor Compared to Other Threats’, Gale Opposing Viewpoints, 2011\n\n[2] Singer, Eleanor, and Endreny, Phyllis Mildred, Reporting on Risk: How the Mass Media Portray Accidents, Diseases, Disasters and Other Hazards, Russell Sage Foundation, 1993\n", "title": "" }, { "docid": "ee9b4226c39f4e02bd155a90b722c72c", "text": "e media and good government politics defence government digital freedoms Transparency clearly does not have to extend to things like technical specifications of weapons. Such information would be a clear benefit to a competitor allowing them to build their own while being of little help in terms of transparency as most people could not understand it. On the other hand knowing what a weapons system does simply prevents misunderstanding and misjudgement.\n", "title": "" }, { "docid": "5b60c17aa62dd87a3c11971948e3c787", "text": "e media and good government politics defence government digital freedoms Clearly transparency in real time might cause some problems allowing the disruption of ongoing operations. However most of the time information could be released very shortly afterwards rather than being considered secret for 25-30 years. [1] A much shorter timeframe is needed if the transparency is to have any meaning or impact upon policy. 
In the case of WikiLeaks most of the information was already a couple of years old and WikiLeaks said it made sure that there was no information that could endanger lives released.\n\nWe should also remember that a lack of transparency can also endanger lives; this might be the case if it leads to purchases of shoddy equipment without the proper oversight to ensure everything works as it should. For example many countries purchased bomb detectors that are made out of novelty golf ball finders, just plastic, that do not work, from a Briton looking to make a fast buck. It has for example been used to attempt to find car bombs in Iraq. A little transparency in testing and procurement could have gone a long way in protecting those who have to use the equipment. [2]\n\n[1] National Security Forum, No More Secrets, American Bar Association, March 2011, p.8\n\n[2] AFP, ‘Iraq still using phony bomb detectors at checkpoints’, globalpost, 3 May 2013\n", "title": "" }, { "docid": "557898b82fdb4fc4ca3bdcb7096ac0bb", "text": "e media and good government politics defence government digital freedoms Citizens have a right to know what is done in their name\n\nThe nation exists for its citizens; it depends on their consent to maintain order and to raise finances. The main purpose of the state is law and order, and national defence, both of which are covered by security. As an area that is so central to the role of the government it is vital that the stakeholders in that government, its citizens, know what it is the state is doing in their name for their security.\n\nThe Obama administration for example refuses to acknowledge that it is carrying out a campaign using drones while at the same time saying it is "the only game in town in terms of confronting and trying to disrupt the al-Qaeda leadership." [1] If the US government is bombing another country then the US people have a right to know with much less ambiguity what exactly is being done, who is being hit, when and where. They also need to be informed of any possible consequences.\n\n[1] Kaufman, Brett, ‘In Court Today: Fighting the CIA’s Secrecy Claims on Drones’, ACLU, 20 September 2012\n", "title": "" }, { "docid": "8c971b2f19f8dfd929bf86c54c4978ef", "text": "e media and good government politics defence government digital freedoms Transparency helps reduce international tension\n\nTransparency is necessary in international relations. States need to know what each other are doing to assess their actions. Without any transparency the hole is filled by suspicion and threat inflation that can easily lead to miscalculation and even war.\n\nThe Cuban missile crisis is a clear example where a lack of transparency on either side about what they were willing to accept and what they were doing almost led to nuclear war. [1] It is notable that one of the responses to prevent a similar crisis was to install a hotline between the White House and Kremlin. A very small, but vital, step in terms of openness.\n\nToday this is still a problem; China currently worries about the US ‘pivot’ towards Asia complaining it "has aroused a great deal of suspicion in China." "A huge deficit of strategic trust lies at the bottom of all problems between China and the United States." The result would be an inevitable arms race and possible conflict. 
[2]\n\n[1] Frohwein, Ashley, ‘Embassy Moscow: A Diplomatic Perspective of the Cuban Missile Crisis’, Georgetown University School of Foreign Service, 7 May 2013\n\n[2] Yafei, He, ‘The Trust Deficit’, Foreign Policy, 13 May 2013\n", "title": "" }, { "docid": "1ca11382acfc6861183dfdf775423f0c", "text": "e media and good government politics defence government digital freedoms Transparency is a good in and of itself\n\nThe most essential commodity within a state is trust. Trust is essential in all sorts of aspects of our lives; we trust that the paper money we have is actually worth more than a scrap of paper, that doctors performing surgery know what they are doing, that we won't be attacked in the street, and that the government is looking after our interests. In order to create that trust there needs to be transparency so that we know that our institutions are trustworthy. It is the ability to check the facts and the accountability that comes with transparency that creates trust. And this in turn is what makes them legitimate. [1]\n\nThe need for trust applies just as much to security as any other walk of life. Citizens need to trust that the security services really are keeping them safe, are spending taxpayers’ money wisely, and are acting in a fashion that is a credit to the country. Unfortunately if there is not transparency there is no way of knowing if this is the case and so often the intelligence services have turned out to be an embarrassment. As has been the case with the CIA and its use of torture following 9/11, for which there are still calls for transparency on past actions. [2]\n\n[1] Ankersmit, Laurens, ‘The Irony of the international relations exception in the transparency regulation’, European Law Blog, 20 March 2013\n\n[2] Traub, James, ‘Out With It’, Foreign Policy, 10 May 2013\n", "title": "" }, { "docid": "2494278a88bfb0294a3cda5ace3a9ba3", "text": "e media and good government politics defence government digital freedoms Transparency prevents, or corrects, mistakes\n\nTransparency is fundamental in making sure that mistakes don’t happen, or when they do that they are found and corrected quickly with appropriate accountability. This applies as much, if not more, to the security apparatus than other walks of life. In security mistakes are much more likely to be a matter of life and death than in most other walks of life. They are also likely to be costly; something the military and national security apparatus is particularly known for. [1] An audit of the Pentagon in 2011 found that the US Department of Defense wasted $70 billion over two years. [2] This kind of waste can only be corrected if it is found out about, and for that transparency is necessary.\n\n[1] Schneier, Bruce, ‘Transparency and Accountability Don’t Hurt Security – They’re Crucial to It’, The Atlantic, 8 May 2012\n\n[2] Schweizer, Peter, ‘Crony Capitalism Creeps Into the Defense Budget’, The Daily Beast, 22 May 2012\n", "title": "" }, { "docid": "a2c61e7d5a261b887f7baffa69e21599", "text": "e media and good government politics defence government digital freedoms In security too much transparency endangers lives\n\nTransparency is all very well when it comes to how much is being spent on a new tank, aircraft, or generals houses, but it is very different when it comes to operations. Transparency in operations can endanger lives. 
With intelligence services transparency would risk the lives of informants; it is similar with the case of interpreters for US forces in Iraq who were targeted after they were told they could not wear masks because they are considered to be traitors. [1]\n\nIn military operations being open about almost anything could be a benefit to the opposition. Most obviously things like the timing and numbers involved in operations need to be kept under wraps but all sorts of information could be damaging in one way or another. Simply because a state is not involved in a full scale war does not mean it can open up on these operations. This is why the Chairman of the Joint Chiefs Admiral Mike Mullen in response to WikiLeaks said “Mr. Assange can say whatever he likes about the greater good he thinks he and his source are doing… But the truth is they might already have on their hands the blood of some young soldier or that of an Afghan family.” [2]\n\n[1] Londoño, Ernesto, ‘U.S. Ban on Masks Upsets Iraqui Interpreters’, Washington Post, 17 November 2008\n\n[2] Jaffe, Greg, and Partlow, Joshua, ‘Joint Chiefs Chairman Mullen: WikiLeaks release endangers troops, Afghans’, Washington Post, 30 July 2010\n", "title": "" }, { "docid": "c68405c453794f19c4fdf3ceffc8e00b", "text": "e media and good government politics defence government digital freedoms Provides information to competitors\n\nWhere there is international competition transparency can be a problem if there is not transparency on both sides as one side is essentially giving its opponent an advantage. This is ultimately why countries keep national security secrets; they are in competition with other nations and the best way to ensure an advantage over those states is to keep capabilities secret. One side having information while the other does not allows the actor that has the information to act differently in response to that knowledge. Keeping things secret can therefore provide an advantage when making a decision, as the one with most information is most likely to react best. [1] Currently there is information asymmetry between the United States and China to the point where some analysts consider that the United States provides more authoritative information on China’s military than China itself does. [2]\n\n[1] National Security Forum, No More Secrets, American Bar Association, March 2011, p.7\n\n[2] Erickson, Andrew S., ‘Pentagon Report Reveals Chinese Military Developments’, The Diplomat, 8 May 2013\n", "title": "" }, { "docid": "fde44dfde57e4fe7ac171eb412494d22", "text": "e media and good government politics defence government digital freedoms Transparency can lead to conflict\n\nThe idea that transparency is good assumes that the people watching the government be transparent are likely to provide a moderating influence on policy. This is not always the case. Instead transparency can lead to more conflict.\n\nFirst a nationalist population may force the government into taking more action than it wants. One obvious way to quiet such sentiment is to show that the country is not ready for war; something that may not be possible if being transparent. Instead if it is transparent that the military could win then there is nothing to stop a march to war. It then becomes possible for multiple interest groups to form into coalitions each with differing reasons for conflict trading off with each other resulting in overstretch and conflict. 
[1]\n\nSecondly when there is a rapidly changing balance of power then transparency for the rising power may not be a good thing. Instead as Deng Xiaoping advised they should "Hide your strength, bide your time". [2] Showing in the open how your military is expanding may simply force action from the current dominant power. Transparency, combined with domestic media worrying about the other’s build up can make the other side seem more and more of a threat that must be dealt with before it can get any more powerful. It is quite a common international relations theory that one way or another relative power and the quest for hegemony is the cause for war, [3] transparency simply encourages this. William C. Wohlforth points out when studying the cause of the First World War that it is perception of relative power that matters. Germany’s leaders believed it had to strike before it ran out of time as a result of Russia rapidly industrialising. [4] Transparency unfortunately reduces the ability of the government to manage perception.\n\n[1] Snyder, Jack, Myths of Empire, Cornell University Press, 1991, p.17\n\n[2] Allison, Graham, and Blackwill, Robert D., ‘Will China Ever Be No.1?’, YaleGlobal, 20 February 2013\n\n[3] Kaplan, Robert D., ‘Why John J. Mearsheimer Is Right (About Some Things)’, The Atlantic, 20 December 2011\n\n[4] Wohlforth, William C., ‘The Perception of Power: Russia in the Pre-1914 Balance’, World Politics, Vol.39, No.3, (April 1987), pp.353-381, p.362\n", "title": "" }, { "docid": "57d978c0658ee5b8e228d32d58bc1ad7", "text": "e media and good government politics defence government digital freedoms Transparency can result in normalisation\n\nWhile something is secret it is clearly not a normal every day part of government, it is deniable and the assumption is that when it comes to light it has probably been wound up long ago. However making something transparent without winding it up can be a bad thing as it makes it normal which ultimately makes a bad policy much harder to end.\n\nThe use of drones by the CIA may turn out to be an example of this. At the moment we are told almost nothing about drones, not even how many strikes there are or how many are killed. There have however been recent suggestions that the drone program could be transferred to the Department of Defence. This would then make the targeted killing that is carried out seem a normal part of military conflict, something it clearly is not. [1] And the public reacts differently to covert and military action; already more Americans support military drones doing targeted killing (75%) than CIA ones (65%). [2]\n\n[1] Waxman, Matthew, ‘Going Clear’, Foreign Policy, 20 March 2013\n\n[2] Zenko, Micah, ‘U.S. Public Opinion on Drone Strikes’, Council on Foreign Relations, 18 March 2013\n", "title": "" }, { "docid": "f98bb4959d33dea7830b3fa122bec2e0", "text": "e media and good government politics defence government digital freedoms Don’t panic!\n\nThe role of the security services is in part to deal with some very dangerous ideas and events. But the point is to deal with them in such a way that does not cause public disorder or even panic. We clearly don’t want every report detailing specific threats to be made public, especially if it is reporting something that could be devastating but there is a low risk of it actually occurring. If such information is taken the wrong way it can potentially cause panic, either over nothing, or else in such a way that it damages any possible response to the crisis. 
Unfortunately the media and the public often misunderstand risk. For example preventing terrorism has been regularly cited in polls as being the Americans top foreign policy goal with more than 80% thinking it very important in Gallup polls for over a decade [1] even when the chance of being killed by terrorism in Western countries is very low. If the public misunderstands the risk the response is unlikely to be proportionate and can be akin to yelling fire in a packed theatre.\n\nWhile it is not (usually) a security, but rather a public health issue, pandemics make a good example. The question of how much information to release is only slightly different than in security; officials want to release enough information that everyone is informed, but not so much that there is panic whenever there is an unusual death. [2] In 2009 the WHO declared swine flu to be a pandemic despite it being a relatively mild virus that did not cause many deaths, so causing an unnecessary scare and stockpiling of drugs. [3]\n\n[1] Jones, Jeffrey M., ‘Americans Say Preventing Terrorism Top Foreign Policy Goal’, Gallup Politics, 20 February 2013\n\n[2] Honigsbaum, Mark, ‘The coronavirus conundrum: when to press the panic button’, guardian.co.uk, 14 February 2013\n\n[3] Cheng, Maria, ‘WHO’s response to swine flu pandemic flawed’, Phys.org, 10 May 2011\n", "title": "" } ]
arguana
05c33a5137e59c396e9adc0675cd6ccb
In security too much transparency endangers lives Transparency is all very well when it comes to how much is being spent on a new tank, aircraft, or generals houses, but it is very different when it comes to operations. Transparency in operations can endanger lives. With intelligence services transparency would risk the lives of informants; it is similar with the case of interpreters for US forces in Iraq who were targeted after they were told they could not wear masks because they are considered to be traitors. [1] In military operations being open about almost anything could be a benefit to the opposition. Most obviously things like the timing and numbers involved in operations need to be kept under wraps but all sorts of information could be damaging in one way or another. Simply because a state is not involved in a full scale war does not mean it can open up on these operations. This is why the Chairman of the Joint Chiefs Admiral Mike Mullen in response to WikiLeaks said “Mr. Assange can say whatever he likes about the greater good he thinks he and his source are doing… But the truth is they might already have on their hands the blood of some young soldier or that of an Afghan family.” [2] [1] Londoño, Ernesto, ‘U.S. Ban on Masks Upsets Iraqui Interpreters’, Washington Post, 17 November 2008 [2] Jaffe, Greg, and Partlow, Joshua, ‘Joint Chiefs Chairman Mullen: WikiLeaks release endangers troops, Afghans’, Washington Post, 30 July 2010
[ { "docid": "5b60c17aa62dd87a3c11971948e3c787", "text": "e media and good government politics defence government digital freedoms Clearly transparency in real time might cause some problems allowing the disruption of ongoing operations. However most of the time information could be released very shortly afterwards rather than being considered secret for 25-30 years. [1] A much shorter timeframe is needed if the transparency is to have any meaning or impact upon policy. In the case of WikiLeaks most of the information was already a couple of years old and WikiLeaks said it made sure that there was no information that could endanger lives released.\n\nWe should also remember that a lack of transparency can also endanger lives; this might be the case if it leads to purchases of shoddy equipment without the proper oversight to ensure everything works as it should. For example many countries purchased bomb detectors that are made out of novelty golf ball finders, just plastic, that do not work, from a Briton looking to make a fast buck. It has for example been used to attempt to find car bombs in Iraq. A little transparency in testing and procurement could have gone a long way in protecting those who have to use the equipment. [2]\n\n[1] National Security Forum, No More Secrets, American Bar Association, March 2011, p.8\n\n[2] AFP, ‘Iraq still using phony bomb detectors at checkpoints’, globalpost, 3 May 2013\n", "title": "" } ]
[ { "docid": "2eafb3797f068beb299caa9d706002d4", "text": "e media and good government politics defence government digital freedoms Drones are an unusual example (though not unique) because they are a new form of warfare over which there are few clear rules and norms. This means that making it transparent will create new norms. However the vast majority of covert operations, if made public, would clearly be illegal and would have to be ended. Drones are also unusual in that the public sees few downsides to the killing, which means there would be less public pressure than in most such operations.\n", "title": "" }, { "docid": "09379b5481b93d163504bdcf5b12e9c9", "text": "e media and good government politics defence government digital freedoms Coalitions can form behind expansionist policies regardless of whether there is transparency. If there is no transparency then it is simply an invitation for these groups to overestimate the strength of their own state compared to their opponents. Where there is transparency the figures will at least be available to counter their arguments. It should not be surprising that interest groups do not have as much influence in creating expansionist policy in democracies. [1]\n\nTransparency showing when a state is to be eclipsed is a greater concern but a lack of transparency in such a case is just as bad. No transparency will simply encourage the fears of the state that is to be eclipsed that the rising state is hostile and not to be trusted.\n\n[1] Snyder, Jack, Myths of Empire, Cornell University Press, 1991, p.18\n", "title": "" }, { "docid": "5ee863b175d8bc7f083fcb10f9f8e6b7", "text": "e media and good government politics defence government digital freedoms The public is rational and can make its own assessment of risk. The best course in such cases is transparency and education. If all relevant information is released, along with analysis as to the risk presented by the threat, then the public can be best informed about what kind of threats they need to be prepared for. Terrorism has been blown out of proportion because terrorist attacks are single deadly incidents that are simple to report and have a good narrative to provide 24/7 coverage that the public will lap up. [1] As a result there has been much more media coverage than other threats. It can then be no surprise that the public overestimate the threat posed by terrorism as the public are told what risks are relevant by the amount of media coverage. [2]\n\n[1] Engelhardt, Tom, ‘Casualties from Terrorism Are Minor Compared to Other Threats’, Gale Opposing Viewpoints, 2011\n\n[2] Singer, Eleanor, and Endreny, Phyllis Mildred, Reporting on Risk: How the Mass Media Portray Accidents, Diseases, Disasters and Other Hazards, Russell Sage Foundation, 1993\n", "title": "" }, { "docid": "ee9b4226c39f4e02bd155a90b722c72c", "text": "e media and good government politics defence government digital freedoms Transparency clearly does not have to extend to things like technical specifications of weapons. Such information would be a clear benefit to a competitor allowing them to build their own while being of little help in terms of transparency as most people could not understand it. On the other hand knowing what a weapons system does simply prevents misunderstanding and misjudgement.\n", "title": "" }, { "docid": "8414e4254b854a82f39dea1f7c7e4b12", "text": "e media and good government politics defence government digital freedoms Being a citizen does not come with a right to know everything that the state does. 
In much the same way being a shareholder does not mean you get to know absolutely everything every person in a business does. Instead you get the headlines and a summary, most of the time how the business goes about getting the results is left to the management. Ultimately the state’s purpose is to protect its citizens and this comes before letting them know everything about how that is done.\n", "title": "" }, { "docid": "ea1bf901b9c016e50b93d6b38fdfb10d", "text": "e media and good government politics defence government digital freedoms This is clearly not always the case. Often transparency means that the public becomes aware when there is little need for them to know. There had been previous nuclear accidents that had caused no damage, and had not been noticed, such as in Goldsboro, N.C. in 1961. [1] If there had been a media frenzy fuelled by released information there would clearly have been much more of a public relations disaster than there was with no one noticing. Since there was no harm done there is little reason why such a media circus should have been encouraged. And even without media attention the incident led to increased safeguards.\n\n[1] Stiles, David, ‘A Fusion Bomb over Andalucia: U.S. Information Policy and the 1966 Palomares Incident’, Journal of War Studies, Vol.8, No.1, Winter 2006, pp.49-67, p.51\n", "title": "" }, { "docid": "669b66c5254b042f7fdf8e3dafcb8a0b", "text": "e media and good government politics defence government digital freedoms Transparency may mean that mistakes or problems are found faster, but it does not mean they are going to be corrected faster. Waste in the defense budget has been known about for years yet it still keeps coming up. Transparency shines a light on the problem but that is not helpful if it does not result in action to solve the problem.\n", "title": "" }, { "docid": "111d28308056b7bde04aa9dfb31ed7ef", "text": "e media and good government politics defence government digital freedoms Transparency in situations of international tension is tricky; with complete transparency how do you engage in bluffing? The state that is completely transparent is tying one hand behind its back in international negotiations.\n\nIt is also wrong to assume that transparency will always reduce tensions. Sometimes two countries just have completely incompatible interests. In such instances complete transparency is simply going to set them on a collision course. It is then much better for there to be a bit less transparency so that both sides can fudge the issue and sign up to an agreement while interpreting it in different ways.\n", "title": "" }, { "docid": "8222066b85109d2a33c55a3163c44a4a", "text": "e media and good government politics defence government digital freedoms Trust goes two ways; the people have to trust that on some issues, such as security, the government is doing the right thing to protect them even when it cannot release all relevant information. But even if the military and security services do claim to be completely transparent then how is everyone to know that it really is being as transparent as they say? Unfortunately there are information asymmetries between members of the public and the government; the member of the public is unlikely to have the capability to find out if the government is hiding something from them. [1] Other countries too are likely to be suspicious of ‘complete transparency’ and simply believe that this is cover for doing something more nefarious. 
Trust then cannot be only about being transparent in everything.\n\n[1] Stiglitz, Joseph, ‘Transparency in Government’, in Roumeen Islam, The right to tell: the role of the mass media in economic development, World Bank Publications, 2002, p.28\n", "title": "" }, { "docid": "c68405c453794f19c4fdf3ceffc8e00b", "text": "e media and good government politics defence government digital freedoms Provides information to competitors\n\nWhere there is international competition transparency can be a problem if there is not transparency on both sides as one side is essentially giving its opponent an advantage. This is ultimately why countries keep national security secrets; they are in competition with other nations and the best way to ensure an advantage over those states is to keep capabilities secret. One side having information while the other does not allows the actor that has the information to act differently in response to that knowledge. Keeping things secret can therefore provide an advantage when making a decision, as the one with most information is most likely to react best. [1] Currently there is information asymmetry between the United States and China to the point where some analysts consider that the United States provides more authoritative information on China’s military than China itself does. [2]\n\n[1] National Security Forum, No More Secrets, American Bar Association, March 2011, p.7\n\n[2] Erickson, Andrew S., ‘Pentagon Report Reveals Chinese Military Developments’, The Diplomat, 8 May 2013\n", "title": "" }, { "docid": "fde44dfde57e4fe7ac171eb412494d22", "text": "e media and good government politics defence government digital freedoms Transparency can lead to conflict\n\nThe idea that transparency is good assumes that the people watching the government be transparent are likely to provide a moderating influence on policy. This is not always the case. Instead transparency can lead to more conflict.\n\nFirst a nationalist population may force the government into taking more action than it wants. One obvious way to quiet such sentiment is to show that the country is not ready for war; something that may not be possible if being transparent. Instead if it is transparent that the military could win then there is nothing to stop a march to war. It then becomes possible for multiple interest groups to form into coalitions each with differing reasons for conflict trading off with each other resulting in overstretch and conflict. [1]\n\nSecondly when there is a rapidly changing balance of power then transparency for the rising power may not be a good thing. Instead as Deng Xiaoping advised they should "Hide your strength, bide your time". [2] Showing in the open how your military is expanding may simply force action from the current dominant power. Transparency, combined with domestic media worrying about the other’s build up can make the other side seem more and more of a threat that must be dealt with before it can get any more powerful. It is quite a common international relations theory that one way or another relative power and the quest for hegemony is the cause for war, [3] transparency simply encourages this. William C. Wohlforth points out when studying the cause of the First World War that it is perception of relative power that matters. Germany’s leaders believed it had to strike before it ran out of time as a result of Russia rapidly industrialising. 
[4] Transparency unfortunately reduces the ability of the government to manage perception.\n\n[1] Snyder, Jack, Myths of Empire, Cornell University Press, 1991, p.17\n\n[2] Allison, Graham, and Blackwill, Robert D., ‘Will China Ever Be No.1?’, YaleGlobal, 20 February 2013\n\n[3] Kaplan, Robert D., ‘Why John J. Mearsheimer Is Right (About Some Things)’, The Atlantic, 20 December 2011\n\n[4] Wohlforth, William C., ‘The Perception of Power: Russia in the Pre-1914 Balance’, World Politics, Vol.39, No.3, (April 1987), pp.353-381, p.362\n", "title": "" }, { "docid": "57d978c0658ee5b8e228d32d58bc1ad7", "text": "e media and good government politics defence government digital freedoms Transparency can result in normalisation\n\nWhile something is secret it is clearly not a normal every day part of government, it is deniable and the assumption is that when it comes to light it has probably been wound up long ago. However making something transparent without winding it up can be a bad thing as it makes it normal which ultimately makes a bad policy much harder to end.\n\nThe use of drones by the CIA may turn out to be an example of this. At the moment we are told almost nothing about drones, not even how many strikes there are or how many are killed. There have however been recent suggestions that the drone program could be transferred to the Department of Defence. This would then make the targeted killing that is carried out seem a normal part of military conflict, something it clearly is not. [1] And the public reacts differently to covert and military action; already more Americans support military drones doing targeted killing (75%) than CIA ones (65%). [2]\n\n[1] Waxman, Matthew, ‘Going Clear’, Foreign Policy, 20 March 2013\n\n[2] Zenko, Micah, ‘U.S. Public Opinion on Drone Strikes’, Council on Foreign Relations, 18 March 2013\n", "title": "" }, { "docid": "f98bb4959d33dea7830b3fa122bec2e0", "text": "e media and good government politics defence government digital freedoms Don’t panic!\n\nThe role of the security services is in part to deal with some very dangerous ideas and events. But the point is to deal with them in such a way that does not cause public disorder or even panic. We clearly don’t want every report detailing specific threats to be made public, especially if it is reporting something that could be devastating but there is a low risk of it actually occurring. If such information is taken the wrong way it can potentially cause panic, either over nothing, or else in such a way that it damages any possible response to the crisis. Unfortunately the media and the public often misunderstand risk. For example preventing terrorism has been regularly cited in polls as being the Americans top foreign policy goal with more than 80% thinking it very important in Gallup polls for over a decade [1] even when the chance of being killed by terrorism in Western countries is very low. If the public misunderstands the risk the response is unlikely to be proportionate and can be akin to yelling fire in a packed theatre.\n\nWhile it is not (usually) a security, but rather a public health issue, pandemics make a good example. The question of how much information to release is only slightly different than in security; officials want to release enough information that everyone is informed, but not so much that there is panic whenever there is an unusual death. 
[2] In 2009 the WHO declared swine flu to be a pandemic despite it being a relatively mild virus that did not cause many deaths, so causing an unnecessary scare and stockpiling of drugs. [3]\n\n[1] Jones, Jeffrey M., ‘Americans Say Preventing Terrorism Top Foreign Policy Goal’, Gallup Politics, 20 February 2013\n\n[2] Honigsbaum, Mark, ‘The coronavirus conundrum: when to press the panic button’, guardian.co.uk, 14 February 2013\n\n[3] Cheng, Maria, ‘WHO’s response to swine flu pandemic flawed’, Phys.org, 10 May 2011\n", "title": "" }, { "docid": "557898b82fdb4fc4ca3bdcb7096ac0bb", "text": "e media and good government politics defence government digital freedoms Citizens have a right to know what is done in their name\n\nThe nation exists for its citizens; it depends on their consent to maintain order and to raise finances. The main purpose of the state is law and order, and national defence, both of which are covered by security. As an area that is so central to the role of the government it is vital that the stakeholders in that government, its citizens, know what it is the state is doing in their name for their security.\n\nThe Obama administration for example refuses to acknowledge that it is carrying out a campaign using drones while at the same time saying it is "the only game in town in terms of confronting and trying to disrupt the al-Qaeda leadership." [1] If the US government is bombing another country then the US people have a right to know with much less ambiguity what exactly is being done, who is being hit, when and where. They also need to be informed of any possible consequences.\n\n[1] Kaufman, Brett, ‘In Court Today: Fighting the CIA’s Secrecy Claims on Drones’, ACLU, 20 September 2012\n", "title": "" }, { "docid": "04ab403cbe5fefe3364b36e9f3d654bf", "text": "e media and good government politics defence government digital freedoms Transparency prevents public relations disasters\n\nTransparency is necessary to avoid public relations disasters; particularly in countries where the media has some freedom to investigate for themselves. It is clearly the best policy for the military to make sure all the information is released along with the reasons behind actions rather than having the media finding individual pieces of a whole and speculating to fill the gaps.\n\nA good example would be a collision on 16th January 1966 between a B-52 bomber and a KC-135 tanker while attempting to refuel that destroyed both planes. Accidents happen, and this one cost 11 lives, but could have been much worse as the B-52 had four nuclear bombs on board, which were not armed and did not detonate. In this case an initial lack of information rapidly turned into a public relations disaster that was stemmed by much more openness by the military and the US Ambassador in Spain. The release of the information reduces the room for the press to fill in the gaps with harmful speculation. [1] In this case there was never much chance of national security implications or a break with Spain as the country was ruled by the dictator Franco, someone who would hardly pay attention to public opinion. But in a democracy a slow and closed response could seriously damage relations.\n\n[1] Stiles, David, ‘A Fusion Bomb over Andalucia: U.S. 
Information Policy and the 1966 Palomares Incident’, Journal of Cold War Studies, Vol.8, No.1, Winter 2006, pp.49-67, p.65\n", "title": "" }, { "docid": "8c971b2f19f8dfd929bf86c54c4978ef", "text": "e media and good government politics defence government digital freedoms Transparency helps reduce international tension\n\nTransparency is necessary in international relations. States need to know what each other are doing to assess their actions. Without any transparency the hole is filled by suspicion and threat inflation that can easily lead to miscalculation and even war.\n\nThe Cuban missile crisis is a clear example where a lack of transparency on either side about what they were willing to accept and what they were doing almost led to nuclear war. [1] It is notable that one of the responses to prevent a similar crisis was to install a hotline between the White House and Kremlin. A very small, but vital, step in terms of openness.\n\nToday this is still a problem; China currently worries about the US ‘pivot’ towards Asia, complaining it “has aroused a great deal of suspicion in China.” “A huge deficit of strategic trust lies at the bottom of all problems between China and the United States.” The result would be an inevitable arms race and possible conflict. [2]\n\n[1] Frohwein, Ashley, ‘Embassy Moscow: A Diplomatic Perspective of the Cuban Missile Crisis’, Georgetown University School of Foreign Service, 7 May 2013\n\n[2] Yafei, He, ‘The Trust Deficit’, Foreign Policy, 13 May 2013\n", "title": "" }, { "docid": "1ca11382acfc6861183dfdf775423f0c", "text": "e media and good government politics defence government digital freedoms Transparency is a good in and of itself\n\nThe most essential commodity within a state is trust. Trust is essential in all sorts of aspects of our lives; we trust that the paper money we have is actually worth more than a scrap of paper, that doctors performing surgery know what they are doing, that we won't be attacked in the street, and that the government is looking after our interests. In order to create that trust there needs to be transparency so that we know that our institutions are trustworthy. It is the ability to check the facts and the accountability that comes with transparency that creates trust. And this in turn is what makes them legitimate. [1]\n\nThe need for trust applies just as much to security as any other walk of life. Citizens need to trust that the security services really are keeping them safe, are spending taxpayers’ money wisely, and are acting in a fashion that is a credit to the country. Unfortunately if there is not transparency there is no way of knowing if this is the case and so often the intelligence services have turned out to be an embarrassment. As has been the case with the CIA and its use of torture following 9/11, for which there are still calls for transparency on past actions. [2]\n\n[1] Ankersmit, Laurens, ‘The Irony of the international relations exception in the transparency regulation’, European Law Blog, 20 March 2013\n\n[2] Traub, James, ‘Out With It’, Foreign Policy, 10 May 2013\n", "title": "" }, { "docid": "2494278a88bfb0294a3cda5ace3a9ba3", "text": "e media and good government politics defence government digital freedoms Transparency prevents, or corrects, mistakes\n\nTransparency is fundamental in making sure that mistakes don’t happen, or when they do that they are found and corrected quickly with appropriate accountability. This applies as much, if not more, to the security apparatus as to other walks of life. 
In security mistakes are much more likely to be a matter of life and death than in most other walks of life. They are also likely to be costly; something the military and national security apparatus is particularly known for. [1] An audit of the Pentagon in 2011 found that the US Department of Defense wasted $70 billion over two years. [2] This kind of waste can only be corrected if it is found out about, and for that transparency is necessary.\n\n[1] Schneier, Bruce, ‘Transparency and Accountability Don’t Hurt Security – They’re Crucial to It’, The Atlantic, 8 May 2012\n\n[2] Schweizer, Peter, ‘Crony Capitalism Creeps Into the Defense Budget’, The Daily Beast, 22 May 2012\n", "title": "" } ]
arguana
a0de4f0640bb3caa6b611c9b62323024
Universities deserve to profit from their work Universities are providing a service just like almost any other business. They provide a service firstly in terms of educating students who are enrolled with them and secondly in conducting research on a wide range of subjects. In both of these cases the university deserves to make a profit out of its work. When acting as an educator universities are in an educational free market; this is the case even when the cost is covered by the state. All universities are aiming to attract as many students as possible and earn as much as possible from fees. If the university is successful it will be able to charge more as it will attract students from further afield. While universities may make a profit on research or even teaching, this profit is for the benefit of society as a whole as the profits are usually simply reinvested in the university’s education and infrastructure. [1] [1] Anon. “What does the money get spent on?” The University of Sheffield, 2013. http://www.shef.ac.uk/finance/staff-information/howfinanceworks/higher_education/money_spent_on
[ { "docid": "52bee454f3b72172298d221ad6905427", "text": "ity digital freedoms access knowledge universities should make all Academic work is not about profit. For most researchers the aim is to satisfy curiosity or to increase the sum of knowledge. Others are motivated by a desire to do good, or possibly for recognition. None of these things require there to be profit for the university.\n\nMoreover we should remember that the profit is not going to the individual who did the research; there is therefore no moral justification that the person has put effort in and so deserves to profit from it. The university does not even take the risk, which is borne by the taxpayer who pays the majority of the research budget. Much of the profit from publishing this knowledge does not even go to the university. Instead academic publishers make huge profits through rentier capitalism. They have profit margins of 36% despite not doing the research, or taking any risk that goes into funding the research. [1]\n\n[1] Monbiot, George, “Academic publishers make Murdoch look like a socialist”, The Guardian, 29 August 2011, http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist\n", "title": "" } ]
[ { "docid": "f09fafee9ff1ff245dc1d79d6d1c083e", "text": "ity digital freedoms access knowledge universities should make all This is trying to pull the wool over the eyes of those who fund the research in the first place; the taxpayer. The taxpayer (or in some cases private funder) pays for the research to be done and so is paying for the paper to be written. It then does not make sense that the taxpayer should pay again in order to access the research that they paid to have done in the first place. Yes there are small costs associated with checking and editing the articles but these could easily be added into research budgets especially as it would mean cutting out an extra cost that occurs due to the profit margins of the academic publishers. As Neelie Kroes, European Commission Vice-President for the Digital Agenda, says “Taxpayers should not have to pay twice for scientific research”. [1]\n\n[1] Kroes, Neelie, “Scientific data: open access to research results will boost Europe's innovation capacity”, Europa.eu, 17 July 2012. http://europa.eu/rapid/press-release_IP-12-790_en.htm?locale=en\n", "title": "" }, { "docid": "be781d6fad57e9e4baab0bc0d0238c22", "text": "ity digital freedoms access knowledge universities should make all The vast majority of people who go to University are not doing so simply because they are interested in a subject and want to find out more. Instead they are after the qualification and improved job prospects university provides. Even those few who are in large part studying out of curiosity and interest will likely be doing so at university because they like the student life and want the experience.\n\nHowever having courses and materials out in the open can even help universities with recruitment. Providing open access boosts a university’s reputation abroad which helps it in the international student market. Open access to academic work also helps give potential students a much better idea with what they will be studying which is very useful for students who are unsure where to choose. The benefits are obvious as shown by 35% of the Massachusetts Institute of Technology’s students choose the university after looking at its OpenCourseWare. [1]\n\n[1] Daniel, Sir John, and Killion, David, “Are open educational resources the key to global economic growth?”, Guardian Professional, 4 July 2012, http://www.guardian.co.uk/higher-education-network/blog/2012/jul/04/open-educational-resources-and-economic-growth\n", "title": "" }, { "docid": "493fba695eacceb2567565aee51d8cda", "text": "ity digital freedoms access knowledge universities should make all If business wants certain research to use for profit then it is free to do so. However it should entirely fund that research rather than relying on academic institutions to do the research and the government to come up with part of the funding. This would then allow the government to focus its funding on basic research, the kind of research that pushes forward the boundaries of knowledge which may have many applications but is not specifically designed with these in mind. This kind of curiosity driven research can be very important for example research into retroviruses gave the grounding that meant that antiretrovirals to control AIDS were available within a decade of the disease appearing. 
[1]\n\n[1] Chakradhar, Shraddha, “The Case for Curiosity”, Harvard Medical School, 10 August 2012, http://hms.harvard.edu/news/case-curiosity-8-10-12\n", "title": "" }, { "docid": "32a528019381b7394f60293b6cba3efd", "text": "ity digital freedoms access knowledge universities should make all Public funding does not mean that everything should be free and open to use by the public. We do not expect to be allowed to use buildings that are built as government offices as if they were our own. The government builds large amounts of infrastructure such as airports and railways but we don’t expect to be able to use them for free.\n", "title": "" }, { "docid": "5e712f61850383eaacb8f05add810273", "text": "ity digital freedoms access knowledge universities should make all Most students most of the time stick to the core areas of their course and thus are not likely to encounter difficulties with finding the relevant information. For those who do require resources that the university library does not have access to they can use interlibrary loan for a small fee to cover the cost of sending the book or article between universities. [1] The universities in most countries can therefore effectively split the cost of access by specialising in certain subjects which limits the number of journals they need to buy while making the resources available to their students if they really need them.\n\n[1] Anon., “Inter-library loans” Birkbeck University of London. http://www.bbk.ac.uk/lib/about/how/ill/illguide Within the UK Cambridge charges £3 to £6, http://www.bbk.ac.uk/lib/about/how/ill/illguide in Europe the University of Vienna charges €2 http://bibliothek.univie.ac.at/english/interlibrary_loans.html while the United States is higher with Yale charging between $20-30 http://www.library.yale.edu/ill/\n", "title": "" }, { "docid": "ecbaac5b29e7a9189b7c32a87ae49be7", "text": "ity digital freedoms access knowledge universities should make all Open access makes little difference to research. If an academic needs to use an article they don’t have access to they can pay for it and gain access quickly and efficiently.\n\nThe benefits to the economy may also be overstated; we don’t know how much benefit it will create. But we do know it would be badly damaging to the academic publishing industry. We also know there are risks with putting everything out in the open as economies that are currently research leaders will be handing out their advances for free. There is an immense amount of stealing of intellectual property, up to $400 billion a year, so research is obviously considered to be economically worth something. [1] With open access the proposal is instead to make everything available for free for others to take as and when they wish.\n\n[1] Permanent Select Committee on Intelligence, “Backgrounder on the Rogers-Ruppersberger Cybersecurity Bill”, U.S. House of Representatives, http://intelligence.house.gov/backgrounder-rogers-ruppersberger-cybersecurity-bill\n", "title": "" }, { "docid": "4e7a566f40f698d67b4fbc2030a0e074", "text": "ity digital freedoms access knowledge universities should make all Making these academic materials available to the general public does not mean they are useful to anyone. Many of the materials universities produce are not useful unless the reader has attended the relevant lectures. Rather than simply putting those lectures that are recorded and course handbooks online what is needed to open up education is systematically designed online courses that are available to all. 
Unfortunately what this provides will be a profusion of often overlapping and contradictory materials with little guidance for how to navigate through them for those who are not involved in the course in question.\n", "title": "" }, { "docid": "2677118826a60d3d771794875a80e168", "text": "ity digital freedoms access knowledge universities should make all Making everything free to access will damage universities ability to tap private funding\n\nFor most universities even if the government is generous with funding it will still need for some projects require private funding. When providing money for research projects the government often requires cost sharing so the university needs to find other sources of funding. [1] Third parties however are unlikely to be willing to help provide funding for research if they know that all the results of that research will be made open to anyone and everyone. These businesses are funding specific research to solve a particular problem with the intention of profiting from the result. Even if universities themselves don’t want to profit from their research they cannot ignore the private funding as it is rapidly growing, up 250% in the U.S. from 1985-2005, while the government support is shrinking. [2]\n\n[1] Anon. (November 2010), “Research &amp; Sponsored Projects”, University of Michigan. http://orsp.umich.edu/funding/costsharing/cost_sharing_questions.html\n\n[2] Schindler, Adam, “Follow the Money Corporate funding of university research”, Berkley Science Review, Issue 13. http://sciencereview.berkeley.edu/articles/issue13/funding.pdf\n", "title": "" }, { "docid": "8bff7f2a3ca3b4359ec282f1f750d68b", "text": "ity digital freedoms access knowledge universities should make all Who will write and edit the work?\n\nYou can’t take the end result out of the system and assume all the rest of it will continue as usual. Journal articles don’t write themselves; there will still be costs for editors, typesetters, reviewing etc., as well as the time and cost of the writer. The average cost of publishing an article is about £4000. [1]\n\nThere have been two suggested forms of open access ‘Gold’ in which authors pay publishers article publication charges and ‘Green’ under which the author self-archives their papers in open access repositories. The gold option that the UK intends to implement could mean universities having to find an extra £60million a year. [2] In either case the cost is being put on the author.\n\nThis is exactly the same when asking academics to put their lectures, lecture notes, bibliographies etc online. They are being asked to put in more hours grappling with technology without being paid for it.\n\n[1] Moghaddam, Golnessa Galyani, “Why Are Scholarly Journals Costly even with Electronic Publishing?” http://eprints.rclis.org/14213/1/Why_are_scholarly_journals_costly_even_with_electronic_publishing_2009_ILDS_37__3_.pdf p.9\n\n[2] Ayris, Paul, “Why panning for gold may be detrimental to open access research”, Guardian Professional, 23 July 2012. http://www.guardian.co.uk/higher-education-network/blog/2012/jul/23/finch-report-open-access-research\n", "title": "" }, { "docid": "389db7ac8845897c9d6349c66dd482ec", "text": "ity digital freedoms access knowledge universities should make all Less incentive to study at university\n\nIf everything that University provides is open to all then there is less incentive to study at university. 
Anyone who is studying in order to learn about a subject rather than achieve a particular qualification will no longer need to attend the university in order to fulfil their aim. The actual benefit of university education is less in learning content per se than engaging with new ideas critically, something that is frequently more difficult in an online environment.\n\nMoreover if only some countries or institutions were to implement such open access then it makes more sense for any students who are intending to study internationally to go elsewhere as they will still be able to use the resources made available by that university. Open access if not implemented universally is therefore damaging to universities attempts to attract lucrative international students who often pay high tuition fees.\n", "title": "" }, { "docid": "7dc97546372a1779c211de1379dae39f", "text": "ity digital freedoms access knowledge universities should make all Most universities are publically funded so should have to be open with their materials.\n\nThe United States University system is famously expensive and as a result it is probably the system in a developed country that has least public funding yet $346.8billion was spent, mostly by the states, on higher education in 2008-9. [1] In Europe almost 85% of universities funding came from government sources. [2] Considering the huge amounts of money spent on universities by taxpayers they should be able to demand access to the academic work those institutions produce.\n\nEven in countries where there are tuition fees that make up some of the funding for the university it is right that the public should have access to these materials as the tuition fees are being paid for the personal teaching time provided by the lecturers not for the academics’ publications. Moreover those who have paid for a university course would benefit by the materials still being available to access after they have finished university\n\n[1] Caplan, Bruan, “Correction: Total Government Spending on Higher Education”, Library of Economics and Liberty, 16 November 2012, http://econlog.econlib.org/archives/2012/11/correction_tota.html\n\n[2] Vught, F., et al., “Funding Higher Education: A View Across Europe”, Ben Jongbloed Center for Higher Education Policy Studies University of Twente, 2010. http://www.utwente.nl/mb/cheps/publications/Publications%202010/MODERN_Funding_Report.pdf\n", "title": "" }, { "docid": "1963eb08e8c9fa09f4b159239b0baed0", "text": "ity digital freedoms access knowledge universities should make all Openness benefits research and the economy\n\nOpen access can be immensely beneficial for research. It increases the speed of access to publications and opens research up to a wider audience. [1] Some of the most important research has been made much more accessible due to open access. The Human Genome Project would have been an immense success either way but it is doubtful that its economic impact of $796billion would have been realised without open access.\n\nThe rest of the economy benefits too. It has been estimated that switching to open access would generate £100million of economic activity in the United Kingdom as a result of reduced research costs for business and shorter development as a result of being able to access a much broader range of research. 
[2]\n\n[1] Anon., “Open access research advantages”, University of Leicester, http://www2.le.ac.uk/library/find/lra/openaccess/advantages\n\n[2] Carr, Dave, and Kiley, Robert, “Open access to science helps us all”, New Statesman, 13 April 2012. http://www.newstatesman.com/blogs/economics/2012/04/open-access-science-helps-us-all\n", "title": "" }, { "docid": "d3d5078ab584269e6432eb880de0647b", "text": "ity digital freedoms access knowledge universities should make all Opens up education\n\nHigher education, as with other levels of education, should be open to all. Universities are universally respected as the highest form of educational institution available and it is a matter of principle that everyone should have access to this higher level of education. Unfortunately not everyone in the world has this access usually because they cannot afford it, but it may also be because they are not academically inclined. This does not however mean that it is right to simply cut them off from higher educational opportunities. Should those who do not attend university not have access to the same resources as those who do?\n\nThis can have an even greater impact globally than within an individual country. 90% of the world’s population currently have no access to higher education. Providing access to all academic work gives them the opportunities that those in developed countries already have. [1]\n\n[1] Daniel, Sir John, and Killion, David, “Are open educational resources the key to global economic growth?”, Guardian Professional, 4 July 2012, http://www.guardian.co.uk/higher-education-network/blog/2012/jul/04/open-educational-resources-and-economic-growth\n", "title": "" }, { "docid": "930b9c516a26d30234efe27a63b644a2", "text": "ity digital freedoms access knowledge universities should make all Students would be able to benefit from being able to use resources at other universities\n\nHaving paid for access to universities and the materials they provide for research students have a right to expect that they will have all the necessary materials available. Unfortunately this is not always the case. University libraries are unable to afford all the university journals they wish to have access to or need for their courses. Therefore any student who wants to go into areas not anticipated by the course they are enrolled with will find that they do not have access to the materials they require. They then face the cost of getting individual access to an online journal article which can be up to $42, despite there being almost zero marginal cost to the publisher. [1] This even affects the biggest and best resourced university libraries. Robert Darnton the director of Harvard University’s library which pays $3.5million per year for journal articles says “The system is absurd” and “academically restrictive” instead “the answer will be open-access journal publishing”. [2]\n\n[1] Sciverse, “Pay-per-view”, Elsevier, http://www.info.sciverse.com/sciencedirect/buying/individual_article_purchase_options/ppv\n\n[2] Sample, Ian, “Harvard University says it can’t afford journal publishers’ prices”, The Guardian, 24 April 2012. http://www.guardian.co.uk/science/2012/apr/24/harvard-university-journal-publishers-prices\n", "title": "" } ]
arguana
24318c8b77e7decc491c5026783aba2c
Damaging to freedom of speech. People are only truly free to say what they wish when they do not have to worry about being personally persecuted, either by peers, strangers, or their government, for what they are saying. [1] Removing the right to post anonymously increases the pressures people feel to post in a particular way, and thus limits the extent to which they can speak freely. [1] ‘Anonymity’. Electronic Frontier Foundation. URL: https://www.eff.org/issues/anonymity
[ { "docid": "c7091d47aeda262b8904eaeb2c93bf56", "text": "p ip internet digital freedoms privacy house would ban all anonymous Freedom from consequences is not a necessary component of freedom of speech. If someone is free from legal restraints surrounding their ability to speak, they are free to speak. Freedom of speech does not entitle an individual to absolute freedom of consequences of any kind, including social consequences to their speech. While someone should certainly be free to state their opinion, there is no reason why they should be entitled to not be challenged for holding that opinion.\n", "title": "" } ]
[ { "docid": "b1ba9e64bc3bad50395e126a6001a9b7", "text": "p ip internet digital freedoms privacy house would ban all anonymous Self-improvement through an alias or false identity is unlikely to lead to genuine self-improvement. When individuals have multiple identities, they may think of them as distinct from one another, and are thus unlikely to transfer self-improvement from one to another. For example, a recovering addict may only have a renewed attitude in their online identity, and not in real life where it is more important. This is unlikely to be beneficial, and may be actively harmful in terms of limiting the improvement of real life identities.\n", "title": "" }, { "docid": "e19d797b6d7dc8654ddc7779d3edb26e", "text": "p ip internet digital freedoms privacy house would ban all anonymous Protest of this kind is less meaningful. When an organisation such as this is criticised only by anonymous individuals, who are likely to be difficult to contact or learn more about, it is less likely to lead to any kind of long-term meaningful resistance. In the case of Anonymous and the Church of Scientology, there have been no notable acts of resistance to the Church of Scientology other than Anonymous.\n\nAnonymous resistance makes other kinds of resistance less likely to happen, and rarely leads to significant change or action.\n", "title": "" }, { "docid": "b7ee9fe89ed06256e4e48f2ca5c2d303", "text": "p ip internet digital freedoms privacy house would ban all anonymous Small reduction in ability to seek out help and community outweighed by a large reduction in hate speech. Anonymity is not essential to seeking out help and community. The internet is a large and expansive place, meaning that if an individual posts on an obscure site, people that they know in real life are very likely to see it. Even having your real name attached is unlikely to single you out unless you have a particularly distinctive name. Anonymity adds very little to their ability to seek out this help and community.\n\nAdditionally, anonymity is frequently used as a tool to spread hate speech, [1] which the people this point is concerned with are the primary victims of. Even if a lack of anonymity means a marginal reduction in their ability to seek out a supportive community, this is a worthwhile sacrifice for a significant reduction in the amount of hatred directed at them.\n\n[1] ‘Starting Points for Combating Hate Speech Online’. British Institute of Human Rights. URL: http://act4hre.coe.int/rus/content/download/28301/215409/file/Starting%20points.pdf\n", "title": "" }, { "docid": "d85fa04c18b9a0cdd5d8e5dcf405846d", "text": "p ip internet digital freedoms privacy house would ban all anonymous Hate speech will happen regardless. A significant amount of online hate speech is made through accounts under the real life name of the speaker. It is notable that Facebook has required its users to use their real names since 2011, [1] but has still had significant issues with hate speech long after that. [2] The fact is that an enormous amount of hate speakers see what they are saying as entirely legitimate, and are therefore not afraid of having it connected to their real life identities. The fact is that 'hate speech' is localised and culture-dependent. 
Since the Internet brings many cultures together, hate speech will happen almost inadvertently.\n\nAdditionally, online hate speech is very difficult to prosecute even when connected to real life identities, [3] so this policy is unlikely to be effective at making those who now would be identified see any more consequences than before. In the Korean example the law was simply avoided by resorting to foreign sites. [4] The similar lack of consequences is likely to lead to a similar lack of disincentive to posting that kind of material.\n\n[1] ‘Twitter rife with hate speech, terror activity’. Jewish Journal. URL: http://www.jewishjournal.com/lifestyle/article/twitter_rife_with_hate_speech_terror_activity\n\n[2] ‘Facebook Admits It Failed On Hate Speech Following #FBrape Twitter Campaign And Advertiser Boycott’. International Business Times. URL: http://www.ibtimes.com/facebook-admits-it-failed-hate-speech-following-fbrape-twitter-campaign-advertiser-boycott-1282815\n\n[3] ‘Racists, Bigots and the Internet’. Anti-Defamation League. URL: http://archive.adl.org/internet/internet_law3.asp\n\n[4] ‘Law on real name use on Internet ruled illegal’, JoonAng Daily, http://koreajoongangdaily.joinsmsn.com/news/article/article.aspx?aid=295...\n", "title": "" }, { "docid": "30e7bf8f2af585091e30064c6aa96586", "text": "p ip internet digital freedoms privacy house would ban all anonymous Similar prevention can be achieved through raising internet awareness. In the case of children, parents taking a more pro-active role in monitoring and controlling their children’s online activities is likely to be more effective than the measures of this policy. Indeed, signalling that they do need to monitor their children can actually put their children in more danger, as there are considerable risks to children online even without anonymous posting.\n\nOther kinds of fraud can be similarly avoided by raising awareness: people should be made to realise that sending money or bank details to people you don’t know is a bad idea. In fact, the removal of internet aliases may even encourage people to trust people they don’t know, but do know the real names of, even though that is no more advisable.\n", "title": "" }, { "docid": "ac56b96b3c33f78eb9a21b8b4a53ecbc", "text": "p ip internet digital freedoms privacy house would ban all anonymous Moves illegal activity in harder to monitor areas. Those partaking in planning illegal activity will not continue to do so if hiding their identities is not possible. Instead, they will return to using more private means of communication, such as meeting in person, or using any online services that do guarantee anonymity such as TOR. While this may make planning illegal activity more difficult, it also makes it more difficult for law enforcement officials to monitor this behaviour, and come anywhere near stopping it: at least under the status quo they have some idea of where and how it is happening, and can use that as a starting point. Forcing criminals further underground may not be desirable. 
The authorities in cooperation with websites are usually able to find out who users are despite the veil of anonymity for example in the UK the police have arrested people for rape threats made against a campaigner for there to be a woman on UK banknotes.1\n\n1 Masters, Sam, 'Twitter threats: Man arrested over rape-threat tweets against campaigner Caroline Criado-Perez', The Independent, 28, July, 2013, http://www.independent.co.uk/news/uk/crime/twitter-threats-man-arrested-...\n", "title": "" }, { "docid": "79678eb50153611bf9bcf969be935e87", "text": "p ip internet digital freedoms privacy house would ban all anonymous Stopping anonymity does not meaningfully prevent bullying. Internet anonymity is not essentially to bullying: it can be done through a nearly infinite number of media. Importantly, it is not even essential to anonymous bullying. For example, it is quite simple to send anonymous text messages: all that is required is access to a phone that the victim does not have the number of. It is similarly easy to simply write notes or letters, and leave them in places where the victim will find them. Anonymous posting on the internet is far from the only place where these kinds of anonymous attacks are possible.\n\nAll this policy does is shifts the bullying into areas where they may be more difficult to monitor. Rather than sending messages online that can be, albeit with some difficulty, traced back to the perpetrator, or at least used as some kind of evidence, bullies are likely to return to covert classroom bullying that can be much more difficult to identify.\n", "title": "" }, { "docid": "a3d97e3a97af55464ecf276c7959eee7", "text": "p ip internet digital freedoms privacy house would ban all anonymous Limiting ability of oppressed individuals to seek out help and community.\n\nAnonymous posting means people who are made to feel ashamed of themselves, or their identities within their local communities can seek out help and/or like-minded people. For example, a gay teenager in a fiercely homophobic community could find cyber communities that are considerably more tolerant, and even face the same issues as them. This can make an enormous difference to self-acceptance, as people are no longer subjected to a singular, negative view of themselves. [1] Banning anonymous posting removes this ability.\n\n[1] ‘In the Middle East, Marginalized LGBT Youth Find Supportive Communities Online’ Tech President. URL: http://techpresident.com/news/wegov/22823/middle-east-marginalized-lgbt-youth-find-supportive-communities-online\n\n‘Online Identity: Is authenticity or anonymity more important?’ The Guardian. URL: http://www.guardian.co.uk/technology/2012/apr/19/online-identity-authenticity-anonymity\n", "title": "" }, { "docid": "568de474ce4eafb764eeaaaef2ad8001", "text": "p ip internet digital freedoms privacy house would ban all anonymous Limiting ability to experiment with identity.\n\nThe ability to post anonymously on the internet means that people can create a new identity for themselves where they will not be judged in terms of what they have done before. This can be particularly useful for people who are attempting to make significant positive reformations to their lives, such as recovering addicts, thereby facilitating self-improvement. Banning anonymous posting reduces individual’s abilities to better themselves in this way. [1]\n\n[1] ‘Online Identity: Is authenticity or anonymity more important?’ The Guardian. 
URL: http://www.guardian.co.uk/technology/2012/apr/19/online-identity-authenticity-anonymity\n", "title": "" }, { "docid": "cdcceffdcaae0e851f3909c1f66d1cda", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing the extent to which large and powerful organisations can be criticised.\n\nOrganisations with lots of wealth and legal power can be difficult to criticise when one’s name and personal information is attached to all attempts at protest and/or criticism. Internet anonymity means that individuals can criticise these groups without fear of unfair reprisal, and their actions are, as a result, held up to higher levels of scrutiny. For example, internet anonymity were instrumental in the first meaningful and damaging protests against the Church of Scientology by internet group Anonymous. [1] Similarly anonymity has been essential in the model for WikiLeaks and other similar efforts like the New Yorker’s Strongbox. [2]\n\n[1] ‘John Sweeney: Why Church of Scientology’s greatest threat is ‘net’. The Register. URL: http://www.theregister.co.uk/2013/02/21/scientology_internet_threat/\n\n‘Anonymous vs. Scientology’. Ex-Scientology Kids. URL: http://exscientologykids.com/anonymous/\n\n[2] Davidson, Amy, ‘Introducing Strongbox’, The New Yorker, 15 May 2013, http://www.newyorker.com/online/blogs/closeread/2013/05/introducing-strongbox-anonymous-document-sharing-tool.html\n", "title": "" }, { "docid": "05060d7fe24cd1579f72fed6f764c25f", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing hate speech.\n\nOpenly racist, sexist, or otherwise discriminatory comments made through public forums are much more likely when made anonymously, as people feel they are unlikely to see any consequences for voicing their hateful opinions. [1] This leads firstly to a propagation of these views in others, and a higher likelihood of attacks based on this hate, as seeing a particular view more often makes people feel it is more legitimate. [2] More importantly, it causes people from the targeted groups to feel alienated or unwelcome in particular places due to facets of their identity that are out of their control, and all people have a right not to be discriminated against for reasons such as these.\n\nThe proposed policy would enormously reduce the amount of online hate speech posted as people would be too afraid to do it. Although not exactly the same a study of abusive and slanderous posts on Korean forums in the six months following the introduction of their ban on anonymity found that such abusive postings dropped 20%. [3] Additionally it would allow governments to pursue that which is posted under the same laws that all other speech is subject to in their country.\n\n[1] ‘Starting Points for Combating Hate Speech Online’. British Institute of Human Rights. URL: http://act4hre.coe.int/rus/content/download/28301/215409/file/Starting%20points.pdf\n\n[2] ‘John Gorenfield, Moon the Messiah, and the Media Echo Chamber’. Daily Kos. 
URL: http://www.dailykos.com/story/2004/06/24/34812/-John-Gorenfeld-Moon-the-Messiah-and-the-Media-Echo-Chamber\n\n[3] ‘Real Name Verification Law on the Internet: A Poison or Cure for Privacy?’, Carnegie Mellon University, http://weis2011.econinfosec.org/papers/Real%20Name%20Verification%20Law%20on%20the%20Internet%20-%20A%20Poison%20or%20Cu.pdf\n", "title": "" }, { "docid": "2bbd5671f39c2e21f2120dc86f1915fc", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing currently illegal activity.\n\nInternet anonymity is very useful for planning and organising illegal activity, mostly buying and selling illegal goods, such as drugs, firearms, stolen goods, or child pornography, but also, in more extreme cases, for terrorism or assassinations. This is because it can be useful in making plans and advertisements public, thus enabling wider recruitment and assistance, while at the same time preventing these plans from being easily traced back to specific individuals. [1] For example, the website Silk Road openly offers users the opportunity to buy and sell illegal drugs. Sales on this site alone have doubled over the course of six months, hitting $1.7million per month. [2]\n\nThis policy makes it easier for the police to track down the people responsible for these public messages, should they continue. If anonymity is still used, it will be significantly easier to put legal pressure on the website and its users, possibly even denying access to it. If anonymity is not used, obviously it is very easy to trace illegal activity back to perpetrators. In the more likely event that they do not continue, it at least makes organising criminal activities considerably more difficult, and less likely to happen. This means the rule of law will be better upheld, and citizens will be kept safer. [3]\n\n[1] Williams, Phil, ‘Organized Crime and Cyber-Crime: Implications for Business’, CERT, 2002, http://www.cert.org/archive/pdf/cybercrime-business.pdf p.2\n\n[2] ‘Silk Road: the online drug marketplace that officials seem powerless to stop.’ The Guardian. URL: http://www.guardian.co.uk/world/2013/mar/22/silk-road-online-drug-marketplace\n\n[3] ‘Do dark networks aid cyberthieves and abusers?’ BBC News. URL: http://www.bbc.co.uk/news/technology-22754061\n", "title": "" }, { "docid": "8f2722ac2188990dd780dc209a44c128", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing cyberbullying.\n\nWhen internet anonymity is used for bullying, it can make the situation much worse. Firstly, perpetrators are much less likely to hold back or be cautious as they are less concerned with the possibility of being caught. This means the bullying is likely to be more intense than when it is done in real life. [1] Additionally, for victims of cyberbullying, being unable to tell who your harasser is, or even how many there are, can be particularly distressing. [2]\n\nAnonymous posting being significantly less available takes away the particularly damaging anonymous potential of cyberbullying, and allows cyberbullying to be more effectively dealt with.\n\n[1] ‘Traditional Bullying v. Cyberbullying’. CyberBullying, Google Sites. URL: https://sites.google.com/site/cyberbullyingawareness/traditional-bullying-vs-cyberbullying\n\n‘The Problem of Cyberbullies’ Anonymity’. Leo Burke Academy. URL: http://www.lba.k12.nf.ca/cyberbullying/anonymity.htm\n\n[2] ‘Cyberbullying’. Netsafe. 
URL: http://www.cyberbullying.org.nz/teachers/\n", "title": "" }, { "docid": "c57b893a1f887bb3879f32cd0acb0da6", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing fraud using fake identities.\n\nAnonymous posting can be used to make people believe you are someone who you are not. This can be done in order to acquire money from victims either by establishing a dishonest relationship or offering fraudulent business opportunities. [1] It is also a frequently used tool in child abduction cases, where the perpetrator will pretend to be a child or even classmate to gain enough access to a child in order to make abduction viable. It is estimated that nearly 90% of all sexual solicitations of youth are made in online anonymous chat rooms. Additionally, in the UK alone over 200 cases of meeting a child following online grooming, usually via anonymous sites are recorded. [2]\n\nThese are enormous harms that can be easily avoided with the removal of anonymous posting online.\n\n[1] ‘Online Fraud’. Action Fraud. URL: http://www.actionfraud.police.uk/fraud-az-online-fraud\n\n[2] ‘Online child grooming: a literature review on the misuse of social networking sites for grooming children for sexual offences’. Australian Institute of Criminology. URL: http://www.aic.gov.au/documents/3/C/1/%7B3C162CF7-94B1-4203-8C57-79F827168DD8%7Drpp103.pdf\n", "title": "" } ]
arguana
d3ce60ad686d53e53dde5e4eea5cb4d6
Universal broadband is a necessary prerequisite to developing more efficient and effective power-grids Advanced infrastructure technology often relies on the existence of broadband technology universally installed across the grid. Countries like South Korea and Japan have succeeded in expanding their power grids by means of “smart grids”, power-grids that are far more efficient than existing structures in previously leading states like the United States, that make use of the broadband network in the provision of power. The US government has since committed to creating its own new grid, one that would increase efficiency, supply and management, and lower costs of energy provision to its citizens. [1] Such grids depend on the reliable and advanced broadband networks. The incentive for states to employ broadband across their territory is tremendous, beyond mere access to fast internet. This is why private firms will never be sufficient in efficient provision of broadband, because they do not reap all the benefits directly of the smart grid that can arise from its development. The state providing broadband is an essential part of upgrading energy provision for advanced countries in the 21st century. [1] Kass, D. “FCC Chairman Wants Ultra High Speed Broadband in 100 Million US Households by 2020”. IT Channel Planet. 18 February 2010. http://www.itchannelplanet.com/technology_news/article.php/3865856/FCC-Chairman-Wants-Ultra-High-Speed-Broadband-In-100-Million-US-Households-by-2020.htm
[ { "docid": "a68d81be712d0ea9b429ee271603d443", "text": "digital freedoms access information house believes state should provide States can develop new power-grids without needing to furnish all citizens with broadband in order to avail of the smart grid. The cost of developing these technologies and implementing them across the board is woefully high, and the inefficient nature of government services means they would only be more costly to the taxpayer. A better solution would be to liberalize the energy markets in order to encourage private firms to invest in the development of the smart grid.\n", "title": "" } ]
[ { "docid": "4516fc8eab79ccfaf1322679ee506fe0", "text": "digital freedoms access information house believes state should provide The state is rarely an efficient service provider. Conventionally, it provides a shoddy service when it faces no competition, and when it charges low prices it is usually at the expense of the infrastructure and quality of service. When free of market forces, the state is even more likely to rest on its position of monopoly and provide insufficient service. But even with a state service, prices cannot be guaranteed to be kept low, but rather states can well overcharge and exploit their privileged position.\n", "title": "" }, { "docid": "52cfed879f93cc7e1807949745f95c79", "text": "digital freedoms access information house believes state should provide Broadband is a necessary evolution of internet technology that firms would be wise to avail of if they wish to remain competitive. But it is this very desirability that makes the provision of broadband a lucrative business in which many firms participate. Business on a large scale is rarely organised in diffuse patterns, but clustered in major population centres. Economic development can be furnished by the private sector investing in broadband where there is a market. Growth will not be slowed just because some farmers in Nebraska have slower internet. Singapore is an aberrant example, as it is so small and its population so dense that it would be impossible to compare its provision of broadband access to most other countries.\n", "title": "" }, { "docid": "b61e1601008daff16b7185fac3a337cd", "text": "digital freedoms access information house believes state should provide Internet access is not a fundamental right. It is a useful enabler of rights. But that is not reason to guarantee it to all, any more than states owed every citizen access to a printing press a few centuries ago. Even were it a right, internet access could be provided far more efficiently and effectively through the private, rather than the public, sector.\n", "title": "" }, { "docid": "d9cd0ad9fdcaeba51a046ae8023e8f52", "text": "digital freedoms access information house believes state should provide State firms do not necessarily crowd out private firms. Rather, they can furnish services in areas that private firms consider unprofitable, and can coordinate infrastructural process on a wider area, allowing for gains in economies of scale. Eircom provides an example of this too as its reduction in investment in broadband post privatisation meant that the government had to begin reinvesting in broadband itself. [1] Private firms will still have incentives to develop new technologies because there will still be profits to be made. But absent private firms, innovation will still exist. State investment in innovation and new technology can be very effective, as was the case with the Space Race.\n\n[1] Palcic, D., and Reeves, E., “Privatisation and productivity performance in Ireland”, http://www.forfas.ie/media/productivity_chapter11.pdf P.200\n", "title": "" }, { "docid": "e60572d89ec21993c51ad564a41ece30", "text": "digital freedoms access information house believes state should provide If the state overstepped in its regulation, no doubt private competitors would be able to fill the void. But such an eventuality is rather unlikely given the robustness of civil institutions in free societies and the willingness of people to come out in arms against attacks on their freedoms. The state is not a bogey-man. 
Rather, it is the best outlet by which to deliver inexpensive, efficient broadband service.\n", "title": "" }, { "docid": "1f0c8fde8c516e95bdca800633697a52", "text": "digital freedoms access information house believes state should provide The private sector will never be able to meet the demands governments would make in order to build a working broadband network and the subsequent smart grid because their profit motives cannot internalize the social benefits of the new grids and technology. Unfortunately the private sector will only build the infrastructure in profitable densely populated areas neglecting rural areas. The state must therefore fill the gap, either by subsidizing private firms to provide service to unprofitable areas, or to service them itself. Furthermore, it can provide the service more freely and more fairly in order to guarantee that citizens get the services they deserve and need to succeed in the 21st century.\n", "title": "" }, { "docid": "651e6c52ef58f64cd79fcca9b47d6687", "text": "digital freedoms access information house believes state should provide It would provide an efficient service for everyone\n\nA single, universal provider of broadband would allow the government to rationalize the management and development of the service. Multiple private service-providers ultimately end up causing three serious problems. The first two are straightforward, that private firms competing in the same area waste money creating multiple distribution channels that are unnecessary for the number of consumers, and that when they opt not to compete they end up dividing up territory into effective utility monopolies. The third problem is especially salient to the state when it is attempting to provide for everyone: many areas are too sparsely populated or economically underdeveloped that private firms are unwilling to invest in them; these areas are entirely dependent on state intervention to allow them to get broadband access. Thus for example, in the United States 19 million people in the United States still have no broadband access. [1] Much like electrical and water utilities, a single provider can create the most efficient outcome for consumers, and when that provider is the state it can guarantee affordable prices and commit to not price-gouging as private firms are wont to do. [2] Broadband should be treated as a utility, and the state has always proven to be the best purveyor of public utilities.\n\n[1] Elgan, M. “Should Wireless Carriers be Nationalized?”. Huffington Post. 10 October 2012, http://www.huffingtonpost.com/mike-elgan/wireless-carriers-nationalized_b_1955633.html\n\n[2] Encyclopaedia Britannica. \"Public Utility.\" Encyclopædia Britannica Online Academic Edition. Encyclopædia Britannica Inc. 2013 http://www.britannica.com/EBchecked/topic/482523/public-utility\n", "title": "" }, { "docid": "90cf81ff2f30a9cd83015aa8b07d126d", "text": "digital freedoms access information house believes state should provide Broad-based access to broadband is essential for countries to be competitive and to excel\n\nInformation technology is critical to the success of contemporary economies, with even the simplest business ventures. Uneven or non-existent penetration of broadband is a major drag on economic progress. [1] The private sector has been unable to effectively adapt with a holistic approach to the provision of data space and internet speed. 
The state providing these services would guarantee a high quality of service, and penetration across the country, linking all citizens to the network. For a country to compete internationally it needs broadband, and the surest way to provide it, since the private sector has resolutely failed to do so, and where it does provide services, it tends to overcharge. [2] As the Western world is left behind by the internet speeds of erstwhile developing states like Singapore, which has almost total penetration of high quality, state-sponsored broadband, it needs to refocus on what can reverse the trend. [3] Broadband is one of the steps toward the solution.\n\n[1] Elgan, M. “Should Wireless Carriers be Nationalized?”. Huffington Post. 10 October 2012, http://www.huffingtonpost.com/mike-elgan/wireless-carriers-nationalized_b_1955633.html\n\n[2] ibid\n\n[3] Kass, D. “FCC Chairman Wants Ultra High Speed Broadband in 100 Million US Households by 2020”. IT Channel Planet. 18 February 2010. http://www.itchannelplanet.com/technology_news/article.php/3865856/FCC-Chairman-Wants-Ultra-High-Speed-Broadband-In-100-Million-US-Households-by-2020.htm\n", "title": "" }, { "docid": "4660c4060b469baa050ccad75207dd82", "text": "digital freedoms access information house believes state should provide The information age demands a right to broadband access\n\nAs information technology has come more and more to pervade people’s lives, it has become abundantly clear that a new set of positive rights must be considered. In the forefront of this consideration stands broadband. Broadband allows for far more rapid access to the internet, and thus access to the world of information the internet represents. Today, a citizen of a free society must be able to access the internet if he or she is to be able to fully realise their potential. This is because the ability to access the fundamental rights to freedom of expression and civic and social participation are now contingent upon ready access to the internet. Thus access to the internet has itself become a right of citizens, and their access should be guaranteed by the state. This right has been enshrined by several countries, such as France, Finland, Greece, and Spain, thus leading the way toward a more general recognition of this service as a right in the same way other public services are guaranteed. [1] It is a right derived from the evolution of society in the same fashion that the right to healthcare has grown out of countries’ social and economic development.\n\n[1] Lucchi, N. “Access to Network Services and Protection of Constitutional Rights: Recognizing the Essential Role of Internet Access for the Freedom of Expression”. Cardozo J. of Int’L &amp; Comp. Law, Vol.19, 2011, http://www.cjicl.com/uploads/2/9/5/9/2959791/cjicl_19.3_lucchi_article.pdf\n", "title": "" }, { "docid": "51eca6bb9e28ef6241f7dbfa75d6ff7b", "text": "digital freedoms access information house believes state should provide The state can work more effectively through the private sector\n\nIf the state is worried about provision of broadband in areas too sparsely populated or disadvantaged, they can provide subsidies to private firms to develop the areas that are not profitable without needing to develop full government-operated companies. Just because the state is not providing the service does not mean that there cannot be compulsory to provide access to everywhere, many countries post offices for example are obliged to deliver to every address. 
[1] Government employees tend to be overpaid and underworked, leading to chronic inefficiencies that would be absent in a private firm, even one backed with government money.\n\nFurthermore, the cost to the state is prohibitively expensive to go it alone, because state contracts have a marked tendency to go over budget, ultimately harming the taxpayers. These overruns are a standard part of government projects, but they can be ruinous to large scale information technology projects. Indeed, one-third of all IT projects end with premature cancellation as the direct result of overruns. [2] The future of countries’ economic prosperity cannot be entrusted to an organization that will stack the odds toward failure. This policy does not make sense when it is an area in which the private sector is willing to make substantial contributions to the cost. The only way to guarantee a decent level of service and an appropriate level of cost is to allow the private sector to take the lead, and to supplement it with incentives to build more and better systems. In the United States encouraging private investment in broadbrand infrastructure has led to a total of $1.2trillion ploughed into broadband access while Europe’s more state investment approach is falling behind. [3]\n\n[1] United States Postal Service, “Postal Facts”, 2012, http://about.usps.com/who-we-are/postal-facts/welcome.htm Royal Mail Group, “Universal Service Obligation”, http://www.royalmailgroup.com/regulation/how-were-regulated/universal-service-obligation\n", "title": "" }, { "docid": "767cdc22c5d3aac13887c12a5fbbcbe8", "text": "digital freedoms access information house believes state should provide State intervention would crowd out private firms\n\nThe imposition of a powerful state firm dominating the broadband market would serve to reduce the ability of private providers to compete. The greater resources of the state would be able to give it the power to dictate the market, making it less attractive to private investment. Creating a monopolistic provider would be very dangerous considering that this is a sector upon which much of future national development relies. [1] Crowding out private firms will make them less inclined to invest in new technologies, while the state provider is unlikely to fill the gap, as traditionally state utilities rely upon their power of incumbency and size rather than seeking novel services. An example of this is Eircom which, when it was the state utility, provided broadband of a lower quality and at higher price than most private providers. The end result of state dominance and reduction of private competitors is a loss of innovation, a loss of price competition, and an erosion of customer service.\n\n[1] Atkinson, R. “The Role of Competition in a National Broadband Policy”. Journal on Telecommunications and High Technology Law 7. 2009, http://heinonline.org/HOL/Page?handle=hein.journals/jtelhtel7&amp;div=4&amp;g_sent=1&amp;collection=journals\n", "title": "" }, { "docid": "e2c43e54fb047f45ad6da0dee50eeaf2", "text": "digital freedoms access information house believes state should provide It would give undue power to the government over access to the internet\n\nMonopoly, or near-monopoly, power over broadband is far too great a tool to give to governments. States have a long history of abusing rules to curtail access to information and to limit freedom of speech. Domination of broadband effectively gives the state complete control of what information citizens can or cannot consume online. 
ISPs generally function under the principle of Net Neutrality, in which they are expected to allow the free transit of information online. If the state is the sole gatekeeper of knowledge, people may well be kept from information deemed against the public interest. It is harder for opponents of government regulations to voice their opinions online when they have no viable alternative to the state-controlled network. The internet is a place of almost limitless expression and it has empowered more people to take action to change their societies. That great tool of the people must be protected from any and all threats, and most particularly from the state, which could so profit from the curtailment of internet freedom.\n", "title": "" } ]
arguana
d73c579a2393a715f18f749ecff3848e
Sanctions are indiscriminate The problem with sanctions is that they are almost always indiscriminate; the sanctions on Iran today are an example where the international community’s concerns are entirely with the government, over nuclear weapons, not the people, yet the result has been a doubling in the price of staple foodstuffs and rapidly rising unemployment. [1] This will equally be the case here. While sanctioners will try to target the sanctions, the fact is that there is nothing to target with sanctions that would not affect everyday lives. Hackers are ordinary people, so sanctions will clearly affect others like them. The most obvious reactions involve the internet, but blocking access to internet services, penalising ISPs, or cutting off technology transfers harms everyone else as much as the hackers. Often this harm takes the form of simply making the internet less safe for people in that country, because they will have to turn to pirated versions of software. IDC and Microsoft estimate the chances of being infected with malware when using pirated software at one in three, [2] so it is no surprise that the Chinese government in October 2012 launched a campaign to have government and companies purchase legal software. [3] [1] The Economist, ‘A red line and a reeling rial’, 6 October 2012, http://www.economist.com/node/21564229 [2] IDC, ‘White Paper: The Dangerous World of Counterfeit and Pirated Software’, Microsoft, March 2013, http://www.microsoft.com/en-us/news/download/presskits/antipiracy/docs/IDC030513.pdf p.3 [3] Xinhua, ‘Chinese gov’t says no to pirated software’, People’s Daily Online, 26 April 2013, http://english.peopledaily.com.cn/90882/8224829.html
[ { "docid": "7b25102f1aec891098e9b29064227f05", "text": "warpeace digital freedoms intellectual property house would use targeted The aim of sanctions does not have to be to directly affect the individuals doing the hacking, though in some cases this may be possible. Rather, the aim is to change the attitude towards regulation and enforcement by the central government and possibly by the people as a whole. If the people of a country believe they are suffering as a result of the hackers in their midst, they will be much more likely to demand that their government make cracking down on such activities a priority.\n", "title": "" } ]
[ { "docid": "1736cd9138e330b632064cbec2c26187", "text": "warpeace digital freedoms intellectual property house would use targeted This will clearly depend on the country imposing sanctions; sanctions from the US or EU will be much more significant than sanctions from the Philippines. Most countries, however, are part of larger trade blocs; sanctions from the Philippines may not be much of a threat, but sanctions from ASEAN would be much more compelling. Using such regional organisations can help nations get around the problems of agreement associated with broader UN sanctions. There have already been calls for groups such as ASEAN to work together against cyber attacks, [1] and these groupings could be expanded to include other nations that agree with the policy on an ad hoc basis, in much the same way as Japan is looking to join with ASEAN on such defence. [2]\n\n[1] Minnick, Wendell, ‘Malaysia Calls for ASEAN ‘Master Plan’ to Fight Cyber Attacks’, Defense News, 3 June 2012, http://www.defensenews.com/article/20120603/DEFREG03/306030004/Malaysia-Calls-ASEAN-8216-Master-Plan-8217-Fight-Cyber-Attacks\n\n[2] Westlake, Adam, ‘Japan pushes to form cyber-defense network with other ASEAN countries’, Japan Daily Press, 8 October 2012, http://japandailypress.com/japan-pushes-to-form-cyber-defense-network-with-other-asean-countries-0814818\n", "title": "" }, { "docid": "b6f99f44ca1767dd5cb076ab4aa91a56", "text": "warpeace digital freedoms intellectual property house would use targeted Even taking it at face value that most of these hackers are independent actors, not part of a state policy, there would still be solid reasoning behind sanctions. That most cyber-attacks have a financial motive implies that sanctions are the best response, as they hit the attackers in an area they are clearly interested in. As for those who are attacking for ‘patriotic’ reasons, if they are truly patriots they will stop when they see their efforts are really harming their country, not helping it.\n", "title": "" }, { "docid": "3cdab6c89b4a3f9d8d674f8e39e7058a", "text": "warpeace digital freedoms intellectual property house would use targeted Cooperation is not a helpful alternative, as it really means the status quo, when we can see that the status quo is not going to reduce cyber-attacks or bring recompense. Rather, this is precisely what sanctions are needed for: to encourage states that harbour cyber criminals and hackers to use their law enforcement capabilities to crack down on such attacks.\n", "title": "" }, { "docid": "f25d1ab2f94450e2f25f6df5f63003ab", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions cannot be very finely targeted and will always hit other groups as well as the cyber attackers. The chances of knowing the specific individuals who were responsible are next to zero, so those individuals cannot be targeted directly. This is the whole problem with cyber-attacks; they are very difficult to pin down. In the best case, sanctions are applied against the right target but happen to hit others as well; for example, hackers are not the only ones who want advanced computer equipment.
At worst the sanctions will completely miss their target; it would be a major embarrassment for a country to impose sanctions for a cyber-attack only for it later to be discovered that the sanctions are against an innocent party through whom the attack had been routed.\n", "title": "" }, { "docid": "f15ef336ef11afa2660c4d31b59e29de", "text": "warpeace digital freedoms intellectual property house would use targeted An asymmetric response to cyber-attacks in the form of sanctions may prevent escalation, but it could also simply encourage a cyber-attacker to do more, knowing that sanctions cannot stop cyber-attacks. Sanctions in the past have rarely changed policy; sanctions against Cuba did not result in overthrowing Castro, and sanctions have not changed North Korea’s or Iran’s policy towards nuclear weapons, so there is little reason to think that sanctions would stop cyber-attacks. [1] Instead the country being sanctioned will find a way around the sanctions and quite possibly escalate, much as North Korea has upped the stakes whenever new sanctions are imposed, most recently by cancelling a hotline to the South. [2]\n\n[1] Friedman, Lara, ‘Getting over the sanctions delusion’, Foreign Policy The Middle East Channel, 14 March 2010, http://mideast.foreignpolicy.com/posts/2010/03/15/getting_over_the_sanctions_delusion\n\n[2] Branigan, Tania, ‘Expanded UN sanctions on North Korea prompt rage from Pyongyang’, guardian.co.uk, 8 March 2013, http://www.guardian.co.uk/world/2013/mar/08/north-korea-rages-un-sanctions\n", "title": "" }, { "docid": "a7331754800723e686efc05ac2633c7a", "text": "warpeace digital freedoms intellectual property house would use targeted How can there ever be deterrence when the attacker believes they will not be caught, or that if they are, the sanctions will harm others, not themselves? When the problem with preventing cyber-attacks is the difficulty of tracing the source, [1] deterrence becomes more and more difficult to apply. This is not like the Cold War, where both superpowers could be certain that if they launched an attack there would be a devastating response. In this instance there is no certainty; the attacker believes that (a) they won't be caught, (b) there will be no response, and (c) the response won't affect them; and finally, even if they are affected, unless they are caught they will most times believe they will get away with it next time round.\n\n[1] Greenemeier, Larry, ‘Seeking Address: Why Cyber Attacks Are So Difficult to Trace Back to Hackers’, Scientific American, 11 June 2011, http://www.scientificamerican.com/article.cfm?id=tracking-cyber-hackers\n", "title": "" }, { "docid": "afb4f82aaf8940545165194e95561d85", "text": "warpeace digital freedoms intellectual property house would use targeted How do we determine what is proportionate? If some valuable intellectual property is stolen, such as part of the designs for the US's latest fighter jet, the F35, which were hacked in 2009, [1] then what can the response be? Can it simply be the cost of developing this design? If so, then what about the strategic loss the state has suffered; how can that be calculated in?
So long as it is excluded, state-sanctioned cyber-attacks will not be deterred.\n\n[1] Gorman, Siobhan, Cole, August, and Dreazen, Yochi, ‘Computer Spies Breach Fighter-Jet Project’, The Wall Street Journal, 21 April 2009, http://online.wsj.com/article/SB124027491029837401.html\n", "title": "" }, { "docid": "add2f55fb6a906e58b67b1dd90c8fade", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions won't harm the hackers\n\nSanctions are typically used as a response to the actions of another state, not the actions of a private actor. Much cyber espionage is not carried out by government entities such as the army or intelligence services. It is also not encouraged by government regulation. Rather it is carried out by private actors, whether criminal organisations or businesses seeking to undermine their rivals and learn their secrets, usually with a financial motive (75% of data breaches), [1] or else by individuals motivated by nationalism and patriotism to attack those they see as their nation’s enemies. It is difficult to see how sanctions against the nation as a whole affect these groups and individuals. This is certainly the case in China, where many, such as the ‘China Eagle Union’, admit to hacking for nationalist reasons rather than because they are told to by the government. [2]\n\nA response such as sanctions is simply likely to breed more resentment that the other power is attempting to bully their nation. The hackers’ only possible response is then more hacking. For those sponsored by companies, if their company is hit by sanctions it simply becomes all the more necessary to find methods of getting ahead to offset any harm from the sanctions.\n\n[1] Verizon RISK Team, ‘2013 Data Breach Investigations Report’, Verizon, 23 April 2013, http://www.verizonenterprise.com/DBIR/2013/ p.6\n\n[2] Beech, Hannah, ‘China’s Red Hackers: The Tale of One Patriotic Cyberwarrior’, Time, 21 February 2013, http://world.time.com/2013/02/21/chinas-red-hackers-the-tale-of-one-patriotic-cyberwarrior/\n", "title": "" }, { "docid": "7fe7e8ad0813664d07a6a214878c7716", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions require international agreement to be effective\n\nWhen is it legitimate to use sanctions in response to an action? Any individual state (or group of states) can use sanctions against any other state. However, for these sanctions to be effective they need to have broad-based support. Sanctions by an individual country are unlikely to change the behaviour of an aggressor, as the aggressor will be able to get around the sanctions. Moreover, for any country that is a member of the WTO, imposing sanctions may be considered illegal, allowing the other country to counter them with similar measures.\n\nThe problem then is that there is no international response to hacking, and it is unlikely there will be agreement on such a response. When countries like China deny that hacking comes from them, are they likely to support the use of sanctions against such actions? Sanctions for much worse actions are often bogged down when they are attempted at the international level, such as China and Russia vetoing sanctions against Syria in response to the violence there.
[1]\n\n[1] United Nations Security Council, ‘Security Council fails to adopt draft resolution on Syria that would have threatened sanctions, due to negative votes of China, Russian Federation’, un.org, SC/10714, 19 July 2012, https://www.un.org/News/Press/docs/2012/sc10714.doc.htm\n", "title": "" }, { "docid": "e609c6097f654e39e89f4d3629edc297", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions won't work\n\nThe problem with sanctions is that they almost never work, so all they do is provide punishment and damage relations without ever resolving the issue. Numerous studies have shown that sanctions don’t actually change the policy of the country that is being sanctioned. [1] Robert Pape suggests that sanctions are only effective in achieving policy change about 5% of the time, because states can take substantial economic punishment before they give up on anything that might be considered a national interest, and because states are good at shifting the burden of the sanctions onto opposition groups, [2] or else use the sanctions to rally domestic support against the outside actor. [3]\n\nInstead there needs to be renewed cooperation on cyber security. Fundamentally, as with drug smuggling and people trafficking, this is an international problem that needs to be tackled by law enforcement authorities. To that end there needs to be more cooperation, not more recriminations. [4]\n\n[1] Lindsay, James M., ‘Trade Sanctions As Policy Instruments: A Re-Examination’, International Studies Quarterly, Vol.30, Issue 2, June 1986, pp.153-170, http://www.stanford.edu/class/ips216/Readings/lindsay_86.pdf, p.1 provides a list of some of them\n\n[2] Pape, Robert A., ‘Why Economic Sanctions Do Not Work’, International Security, Vol. 22, Issue 2, Autumn 1997, pp.90-137, http://www.stanford.edu/class/ips216/Readings/pape_97%20(jstor).pdf p.106\n\n[3] Snyder, Jack, Myths of Empire, Cornell University Press, 1991\n\n[4] Dingli, Shen, ‘What Kerry Should Tell China’, Foreign Policy, 11 April 2013, http://www.foreignpolicy.com/articles/2013/04/11/what_kerry_should_tell_china\n", "title": "" }, { "docid": "fb833d7369a00211aeda65dbd6f78d3e", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions can be targeted\n\nThe big advantage of sanctions is that they can be as finely targeted as needed. If the sanctioning country only knows which country the cyber attack originated from, then the sanctions can be broad-brush; but if there is knowledge of which group initiated the attack, then the sanctions can be more specific. For example, in the case of Unit 61398 of the Chinese People’s Liberation Army, which Mandiant showed has been attacking US companies, [1] the United States could target sanctions at the People's Liberation Army by tightening weapons bans. Alternatively, if the hackers are private, then banning the import of certain computer equipment into that country would be appropriate. If individuals are known then the sanctions can be even more targeted, for example by freezing any bank accounts held outside their own country, as the US did against North Korea when it sanctioned Banco Delta Asia, through which North Korea laundered money from criminal activities.
[2]\n\n[1] Mandiant, ‘Exposing One of China’s Cyber Espionage Units’, mandiant.com, February 2013, http://intelreport.mandiant.com/Mandiant_APT1_Report.pdf\n\n[2] Noland, Marcus, ‘Why Sanctions Can Hurt North Korea’, Council on Foreign Relations, 4 August 2010, http://www.cfr.org/north-korea/why-sanctions-can-hurt-north-korea/p22762\n", "title": "" }, { "docid": "7bb85771b667bf7c01bb5edabfaf21e5", "text": "warpeace digital freedoms intellectual property house would use targeted There needs to be action to deter more cyber attacks\n\nAt the moment the response to cyber-attacks has essentially been nothing. It is, however, clear that some response is needed, as without a reaction there is no deterrence; the attacks will keep coming until something is done. The number of cyber-attacks and the sensitivity of the information stolen have been increasing over recent years, and as more and more work is done online and more and more systems are connected to the Internet, cyber-attacks become more attractive. There needs to be a deterrent, and the best deterrent is to make sure that such attacks are costly.\n\nAs these attacks are usually cross-border (and in this debate we are only concerned with cross-border attacks), the only way to create a cost is through sanctions. These sanctions can either hit the assailant directly or else hit the assailant's government, so encouraging it to crack down on hacking emanating from the country. It should be remembered that China argues that it does not launch cyber-attacks, [1] meaning that any such attacks from China must be private. If this is the case then sanctions are the best way of prompting internal law enforcement. Sanctions therefore encourage all nations where there are cyber criminals to make sure they take such cyber-crime seriously. If they do not get their own cyber criminals under control then they may be affected by sanctions.\n\n[1] China Daily, ‘China denies launching cyberattacks on US’, China.org.cn, 20 February 2013, http://www.china.org.cn/world/2013-02/20/content_28003282.htm\n", "title": "" }, { "docid": "e1961b357dfdbac2234988b2bb4a2311", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions are a proportionate response\n\nCyber-attacks pose a distinct problem for international diplomacy in that they are difficult to prevent and difficult to respond to. Any kind of military response, such as the United States has threatened, would be completely disproportionate against all but the very biggest of cyber-attacks (those that actually result in deaths). [1] Diplomacy, on the other hand, is as good as no response; if the response is simply a tongue lashing then the benefits of cyber espionage will be far higher than the cost.\n\nThe only proportionate, and therefore just, response to a cyber-attack is sanctions. The sanctions can be used to impose a similar economic cost on the offending state as that caused by the cyber-attack. This would be just like the World Trade Organisation's dispute settlement rules. They allow for the imposition of trade sanctions of a similar value to the losses being experienced as a result of protectionist action, with the sanctions sometimes falling on different sectors to those where there are unfair trade practices.
[2] Alternatively, sanctions could mean a proportionate Internet response; users from the offending nation could be prohibited from using Internet services. For example, an attack by hackers on the US could result in people from that country being blocked from Google and other US internet services.\n\n[1] Friedman, Benjamin H., Preble, Christopher A., ‘A Military Response to Cyberattacks Is Preposterous’, CATO Institute, 2 June 2011, http://www.cato.org/publications/commentary/military-response-cyberattacks-is-preposterous\n\n[2] World Trade Organisation, ‘Understanding the WTO: Settling Disputes’, 2013, http://www.wto.org/english/thewto_e/whatis_e/tif_e/disp1_e.htm\n", "title": "" }, { "docid": "92179370b52369193e56d69ecbee87e5", "text": "warpeace digital freedoms intellectual property house would use targeted Sanctions will prevent escalation in cyber conflict\n\nCyber conflict favours the offence; when the defender is successful they gain nothing and impose no harm on the attacker, who is free to try again elsewhere. The attackers are free to attack until they get past the defences somewhere. [1] That the attacks don't risk lives helps to encourage an offensive mindset, as it makes it seem like there is no downside to attempting to dominate your opponent. [2]\n\nThis means the only cyber response is to attack the attacker so that the same advantages apply.\n\nThe result is that cyber-attacks carry a very real danger of long-term tension or escalation. If one side is losing a conflict where both sides are attempting to steal the other's intellectual property (or the other has little to steal), the response may be something like the Stuxnet attack that involves physical damage; this would probably be considered an illegal use of force, creating a thin line between a cyber-war and a real war. [3] When the cyber war involves physical damage, as the US has warned, there may then be a military response. Sanctions are a way to apply pressure without this risk of escalation into a military conflict.\n\n[1] Lin, Herbert, ‘Escalation Dynamics and Conflict Termination in Cyberspace’, Strategic Studies Quarterly, Fall 2012, http://www.au.af.mil/au/ssq/2012/fall/lin.pdf p.51\n\n[2] Rothkopf, David, ‘The Cool War’, Foreign Policy, 20 February 2013, http://www.foreignpolicy.com/articles/2013/02/20/the_cool_war_china_cyberwar\n\n[3] Zetter, Kim, ‘Legal Experts: Stuxnet Attack on Iran Was Illegal ‘Act of Force’’, Wired, 25 March 2013, http://www.wired.com/threatlevel/2013/03/stuxnet-act-of-force/\n", "title": "" } ]
arguana