| query_id (string, length 32) | query (string, length 5–5.38k) | positive_passages (list, length 1–23) | negative_passages (list, length 7–25) | subset (string, 5 classes) |
| --- | --- | --- | --- | --- |
42a373dad8d6004ba571098d62510a8f
|
The Internet of Battle Things
|
[
{
"docid": "42c6ec7e27bc1de6beceb24d52b7216c",
"text": "Internet of Things (IoT) refers to the expansion of Internet technologies to include wireless sensor networks (WSNs) and smart objects by extensive interfacing of exclusively identifiable, distributed communication devices. Due to the close connection with the physical world, it is an important requirement for IoT technology to be self-secure in terms of a standard information security model components. Autonomic security should be considered as a critical priority and careful provisions must be taken in the design of dynamic techniques, architectures and self-sufficient frameworks for future IoT. Over the years, many researchers have proposed threat mitigation approaches for IoT and WSNs. This survey considers specific approaches requiring minimal human intervention and discusses them in relation to self-security. This survey addresses and brings together a broad range of ideas linked together by IoT, autonomy and security. More particularly, this paper looks at threat mitigation approaches in IoT using an autonomic taxonomy and finally sets down future directions. & 2014 Published by Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "6c3a166eea824f588e3e3a135e2e7a30",
"text": "BACKGROUND\nMobile health (mHealth) describes the use of portable electronic devices with software applications to provide health services and manage patient information. With approximately 5 billion mobile phone users globally, opportunities for mobile technologies to play a formal role in health services, particularly in low- and middle-income countries, are increasingly being recognized. mHealth can also support the performance of health care workers by the dissemination of clinical updates, learning materials, and reminders, particularly in underserved rural locations in low- and middle-income countries where community health workers deliver integrated community case management to children sick with diarrhea, pneumonia, and malaria.\n\n\nOBJECTIVE\nOur aim was to conduct a thematic review of how mHealth projects have approached the intersection of cellular technology and public health in low- and middle-income countries and identify the promising practices and experiences learned, as well as novel and innovative approaches of how mHealth can support community health workers.\n\n\nMETHODS\nIn this review, 6 themes of mHealth initiatives were examined using information from peer-reviewed journals, websites, and key reports. Primary mHealth technologies reviewed included mobile phones, personal digital assistants (PDAs) and smartphones, patient monitoring devices, and mobile telemedicine devices. We examined how these tools could be used for education and awareness, data access, and for strengthening health information systems. We also considered how mHealth may support patient monitoring, clinical decision making, and tracking of drugs and supplies. Lessons from mHealth trials and studies were summarized, focusing on low- and middle-income countries and community health workers.\n\n\nRESULTS\nThe review revealed that there are very few formal outcome evaluations of mHealth in low-income countries. Although there is vast documentation of project process evaluations, there are few studies demonstrating an impact on clinical outcomes. There is also a lack of mHealth applications and services operating at scale in low- and middle-income countries. The most commonly documented use of mHealth was 1-way text-message and phone reminders to encourage follow-up appointments, healthy behaviors, and data gathering. Innovative mHealth applications for community health workers include the use of mobile phones as job aides, clinical decision support tools, and for data submission and instant feedback on performance.\n\n\nCONCLUSIONS\nWith partnerships forming between governments, technologists, non-governmental organizations, academia, and industry, there is great potential to improve health services delivery by using mHealth in low- and middle-income countries. As with many other health improvement projects, a key challenge is moving mHealth approaches from pilot projects to national scalable programs while properly engaging health workers and communities in the process. By harnessing the increasing presence of mobile phones among diverse populations, there is promising evidence to suggest that mHealth can be used to deliver increased and enhanced health care services to individuals and communities, while helping to strengthen health systems.",
"title": ""
},
{
"docid": "7d197033396c7a55593da79a5a70fa96",
"text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.",
"title": ""
},
{
"docid": "7b6e811ea3f227c33755049355949eaf",
"text": "We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Indeed, our formulation even admits a closed form solution. This solution possesses several very attractive propertie s: (i) an innate geometric appeal through the Riemannian geometry of positive definite matrices; (ii) ease of interpretability; and (iii) computational speed several orders of magnitude faster tha n the widely used LMNN and ITML methods. Furthermore, on standard benchmark datasets, our closed-form solution consist ently attains higher classification accuracy.",
"title": ""
},
{
"docid": "766dd6c18f645d550d98f6e3e86c7b2f",
"text": "Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of Licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on the charcoal meal travel, inhibitory at the low doses, while prokinetic at the high doses. In vitro, isoliquiritigenin showed an atropine-sensitive concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunums, guinea pig ileums and atropinized rat stomach fundus, either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high K(+) (80 mM)-induced contractions, or displacement of Ca(2+) concentration-response curves to the right, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve the activating of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of the calcium channels.",
"title": ""
},
{
"docid": "5acf896927ec23d1d11c53f92a4850da",
"text": "Emergence of modern techniques for scientific data collection has resulted in large scale accumulation of data pertaining to diverse fields. Conventional database querying methods are inadequate to extract useful information from huge data banks. Cluster analysis is a primary method for database mining [8]. It is either used as a stand-alone tool to get insight into the distribution of a data set or as a pre-processing step for other algorithms operating on the detected clusters. Almost all of the wellknown clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes the intrinsic clustering structure accurately [1], [2]. DBSCAN (Density Based Spatial Clustering of Application with Noise) [1] is a base algorithm for density based clustering techniques. This paper gives a survey of density based clustering algorithms with the proposed enhanced algorithm that automatically selects the input parameters along with its implementation and comparison with the existing DBSCAN algorithm. The experimental results shows that the proposed algorithm can detect the clusters of varied density with different shapes and sizes from large amount of data which contains noise and outliers, requires only one input parameters and gives better output then the DBSCAN algorithm. KeywordsClustering Algorithms, Data mining, DBSCAN, Density, Eps, Minpts, and VDBSCAN.",
"title": ""
},
{
"docid": "b717cd61178ba093026fca5fad62248d",
"text": "This paper proposes a new low power and low area 4x4 array multiplier designed using modified Gate diffusion Input (GDI) technique. By using GDI cell, the transistor count is greatly reduced. Basic GDI technique shows a drawback of low voltage swing at output which prevents it for use in multiple stage circuits efficiently. We have used modified GDI technique which shows full swing output and hence can be used in multistage circuits. The whole design is made and simulated in 180nm UMC technology at a supply voltage of 1.8V using Cadence Virtuoso Environment.",
"title": ""
},
{
"docid": "616ffe5c6cbb6a32a14042d52bd410d3",
"text": "In the demo, we demonstrate a mobile food recognition system with Fisher Vector and liner one-vs-rest SVMs which enable us to record our food habits easily. In the experiments with 100 kinds of food categories, we have achieved the 79.2% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. The prototype system is open to the public as an Android-based smart-",
"title": ""
},
{
"docid": "073486fe6bcd756af5f5325b27c57912",
"text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.",
"title": ""
},
{
"docid": "c47fde74be75b5e909d7657bb64bf23d",
"text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders",
"title": ""
},
{
"docid": "f267b329f52628d3c52a8f618485ae95",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
},
{
"docid": "32ca9711622abd30c7c94f41b91fa3f6",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
},
{
"docid": "4dc015d3400673bfd3e9ab7d60352e33",
"text": "We describe work that is part of a research project on static code analysis between the Alexandru Ioan Cuza University and Bitdefender. The goal of the project is to develop customized static analysis tools for detecting potential vulnerabilities in C/C++ code. We have so far benchmarked several existing static analysis tools for C/C++ against the Toyota ITC test suite in order to determine which tools are best suited to our purpose. We discuss and compare several quality indicators such as precision, recall and running time of the tools. We analyze which tools perform best for various categories of potential vulnerabilities such as buffer overflows, integer overflow, etc.",
"title": ""
},
{
"docid": "85cfda0c6a2964d342035b45d2ad47ab",
"text": "Distributed Denial of Service (DDoS) attacks grow rapidly and become one of the fatal threats to the Internet. Automatically detecting DDoS attack packets is one of the main defense mechanisms. Conventional solutions monitor network traffic and identify attack activities from legitimate network traffic based on statistical divergence. Machine learning is another method to improve identifying performance based on statistical features. However, conventional machine learning techniques are limited by the shallow representation models. In this paper, we propose a deep learning based DDoS attack detection approach (DeepDefense). Deep learning approach can automatically extract high-level features from low-level ones and gain powerful representation and inference. We design a recurrent deep neural network to learn patterns from sequences of network traffic and trace network attack activities. The experimental results demonstrate a better performance of our model compared with conventional machine learning models. We reduce the error rate from 7.517% to 2.103% compared with conventional machine learning method in the larger data set.",
"title": ""
},
{
"docid": "046df1ccbc545db05d0d91fe8f73d64a",
"text": "Precise models of the robot inverse dynamics allow the design of significantly more accurate, energy-efficient and more compliant robot control. However, in some cases the accuracy of rigidbody models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time model online-learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-SVR. The applicability of the proposed LGP method is demonstrated by real-time online-learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.",
"title": ""
},
{
"docid": "16924ee2e6f301d962948884eeafc934",
"text": "Companies have realized they need to hire data scientists, academic institutions are scrambling to put together data-science programs, and publications are touting data science as a hot-even \"sexy\"-career choice. However, there is confusion about what exactly data science is, and this confusion could lead to disillusionment as the concept diffuses into meaningless buzz. In this article, we argue that there are good reasons why it has been hard to pin down exactly what is data science. One reason is that data science is intricately intertwined with other important concepts also of growing importance, such as big data and data-driven decision making. Another reason is the natural tendency to associate what a practitioner does with the definition of the practitioner's field; this can result in overlooking the fundamentals of the field. We believe that trying to define the boundaries of data science precisely is not of the utmost importance. We can debate the boundaries of the field in an academic setting, but in order for data science to serve business effectively, it is important (i) to understand its relationships to other important related concepts, and (ii) to begin to identify the fundamental principles underlying data science. Once we embrace (ii), we can much better understand and explain exactly what data science has to offer. Furthermore, only once we embrace (ii) should we be comfortable calling it data science. In this article, we present a perspective that addresses all these concepts. We close by offering, as examples, a partial list of fundamental principles underlying data science.",
"title": ""
},
{
"docid": "edfc9cb39fe45a43aed78379bafa2dfc",
"text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.",
"title": ""
},
{
"docid": "2be9c1580e78d4c3f9c1e2fe115a89bc",
"text": "Robotic devices have been shown to be efficacious in the delivery of therapy to treat upper limb motor impairment following stroke. However, the application of this technology to other types of neurological injury has been limited to case studies. In this paper, we present a multi degree of freedom robotic exoskeleton, the MAHI Exo II, intended for rehabilitation of the upper limb following incomplete spinal cord injury (SCI). We present details about the MAHI Exo II and initial findings from a clinical evaluation of the device with eight subjects with incomplete SCI who completed a multi-session training protocol. Clinical assessments show significant gains when comparing pre- and post-training performance in functional tasks. This paper explores a range of robotic measures capturing movement quality and smoothness that may be useful in tracking performance, providing as feedback to the subject, or incorporating into an adaptive training protocol. Advantages and disadvantages of the various investigated measures are discussed with regard to the type of movement segmentation that can be applied to the data collected during unassisted movements where the robot is backdriven and encoder data is recorded for post-processing.",
"title": ""
},
{
"docid": "11ae42bedc18dedd0c29004000a4ec00",
"text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.",
"title": ""
},
{
"docid": "9cdddf98d24d100c752ea9d2b368bb77",
"text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathoglogical conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduce the sensitivity of models on the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.",
"title": ""
},
{
"docid": "83ccee768c29428ea8a575b2e6faab7d",
"text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.",
"title": ""
}
] |
scidocsrr
|
de5020aeb456aef4b030eff5dffe5f7f
|
Air quality data clustering using EPLS method
|
[
{
"docid": "ff1cc31ab089d5d1d09002866c7dc043",
"text": "In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.",
"title": ""
},
{
"docid": "6ca20939907ffe75d5c0125b87abecf3",
"text": "Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made toward this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes.",
"title": ""
},
{
"docid": "b52da336c6d70923a1c4606f5076a3ba",
"text": "Given the recent explosion of interest in streaming data and online algorithms, clustering of time-series subsequences, extracted via a sliding window, has received much attention. In this work, we make a surprising claim. Clustering of time-series subsequences is meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising because it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method that, based on the concept of time-series motifs, is able to meaningfully cluster subsequences on some time-series datasets.",
"title": ""
}
] |
[
{
"docid": "8dce819cc31cf4899cf4bad2dd117dc1",
"text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.",
"title": ""
},
{
"docid": "acbb920f48119857f598388a39cdebb6",
"text": "Quantitative analyses in landscape ecology have traditionally been dominated by the patch-mosaic concept in which landscapes are modeled as a mosaic of discrete patches. This model is useful for analyzing categorical data but cannot sufficiently account for the spatial heterogeneity present in continuous landscapes. Sub-pixel remote sensing classifications offer a potential data source for capturing continuous spatial heterogeneity but lack discrete land cover classes and therefore cannot be analyzed using standard landscape metric tools. This research introduces the threshold gradient method to allow transformation of continuous sub-pixel classifications into a series of discrete maps based on land cover proportion (i.e., intensity) that can be analyzed using landscape metric tools. Sub-pixel data are reclassified at multiple thresholds along a land cover continuum and landscape metrics are computed for each map. Metrics are plotted in response to intensity and these ‘scalograms’ are mathematically modeled using curve fitting techniques to allow determination of critical land cover thresholds (e.g., inflection points) where considerable landscape changes are occurring. Results show that critical land cover intensities vary between metrics, and the approach can generate increased ecological information not available with other landscape characterization methods.",
"title": ""
},
{
"docid": "a8ff130dcb899214da73f66e12a5a1b1",
"text": "We designed and evaluated an assumption-free, deep learning-based methodology for animal health monitoring, specifically for the early detection of respiratory disease in growing pigs based on environmental sensor data. Two recurrent neural networks (RNNs), each comprising gated recurrent units (GRUs), were used to create an autoencoder (GRU-AE) into which environmental data, collected from a variety of sensors, was processed to detect anomalies. An autoencoder is a type of network trained to reconstruct the patterns it is fed as input. By training the GRU-AE using environmental data that did not lead to an occurrence of respiratory disease, data that did not fit the pattern of \"healthy environmental data\" had a greater reconstruction error. All reconstruction errors were labelled as either normal or anomalous using threshold-based anomaly detection optimised with particle swarm optimisation (PSO), from which alerts are raised. The results from the GRU-AE method outperformed state-of-the-art techniques, raising alerts when such predictions deviated from the actual observations. The results show that a change in the environment can result in occurrences of pigs showing symptoms of respiratory disease within 1⁻7 days, meaning that there is a period of time during which their keepers can act to mitigate the negative effect of respiratory diseases, such as porcine reproductive and respiratory syndrome (PRRS), a common and destructive disease endemic in pigs.",
"title": ""
},
{
"docid": "632f42f71b09f4dea40bc1cccd2d9604",
"text": "The phenomenon of radicalization is investigated within a mixed population composed of core and sensitive subpopulations. The latest includes first to third generation immigrants. Respective ways of life may be partially incompatible. In case of a conflict core agents behave as inflexible about the issue. In contrast, sensitive agents can decide either to live peacefully adjusting their way of life to the core one, or to oppose it with eventually joining violent activities. The interplay dynamics between peaceful and opponent sensitive agents is driven by pairwise interactions. These interactions occur both within the sensitive population and by mixing with core agents. The update process is monitored using a Lotka-Volterra-like Ordinary Differential Equation. Given an initial tiny minority of opponents that coexist with both inflexible and peaceful agents, we investigate implications on the emergence of radicalization. Opponents try to turn peaceful agents to opponents driving radicalization. However, inflexible core agents may step in to bring back opponents to a peaceful choice thus weakening the phenomenon. The required minimum individual core involvement to actually curb radicalization is calculated. It is found to be a function of both the majority or minority status of the sensitive subpopulation with respect to the core subpopulation and the degree of activeness of opponents. The results highlight the instrumental role core agents can have to hinder radicalization within the sensitive subpopulation. Some hints are outlined to favor novel public policies towards social integration.",
"title": ""
},
{
"docid": "eee5ffff364575afad1dcebbf169777b",
"text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies",
"title": ""
},
{
"docid": "89263084f29469d1c363da55c600a971",
"text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.",
"title": ""
},
{
"docid": "40cb853a6ca202fa74f1838673421107",
"text": "The analytics platform at Twitter has experienced tremendous growth over the past few years in terms of size, complexity, number of users, and variety of use cases. In this paper, we discuss the evolution of our infrastructure and the development of capabilities for data mining on \"big data\". One important lesson is that successful big data mining in practice is about much more than what most academics would consider data mining: life \"in the trenches\" is occupied by much preparatory work that precedes the application of data mining algorithms and followed by substantial effort to turn preliminary models into robust solutions. In this context, we discuss two topics: First, schemas play an important role in helping data scientists understand petabyte-scale data stores, but they're insufficient to provide an overall \"big picture\" of the data available to generate insights. Second, we observe that a major challenge in building data analytics platforms stems from the heterogeneity of the various components that must be integrated together into production workflows---we refer to this as \"plumbing\". This paper has two goals: For practitioners, we hope to share our experiences to flatten bumps in the road for those who come after us. For academic researchers, we hope to provide a broader context for data mining in production environments, pointing out opportunities for future work.",
"title": ""
},
{
"docid": "864d1c5a2861acc317f9f2a37c6d3660",
"text": "We report a case of an 8-month-old child with a primitive myxoid mesenchymal tumor of infancy arising in the thenar eminence. The lesion recurred after conservative excision and was ultimately nonresponsive to chemotherapy, necessitating partial amputation. The patient remains free of disease 5 years after this radical surgery. This is the 1st report of such a tumor since it was initially described by Alaggio and colleagues in 2006. The pathologic differential diagnosis is discussed.",
"title": ""
},
{
"docid": "cb7a9b816fc1b83670cb9fb377974e5d",
"text": "BACKGROUND\nCare attendants constitute the main workforce in nursing homes, but their heavy workload, low autonomy, and indefinite responsibility result in high levels of stress and may affect quality of care. However, few studies have focused of this problem.\n\n\nOBJECTIVES\nThe aim of this study was to examine work-related stress and associated factors that affect care attendants in nursing homes and to offer suggestions for how management can alleviate these problems in care facilities.\n\n\nMETHODS\nWe recruited participants from nine nursing homes with 50 or more beds located in middle Taiwan; 110 care attendants completed the questionnaire. The work stress scale for the care attendants was validated and achieved good reliability (Cronbach's alpha=0.93). We also conducted exploratory factor analysis.\n\n\nRESULTS\nSix factors were extracted from the work stress scale: insufficient ability, stressful reactions, heavy workload, trouble in care work, poor management, and working time problems. The explained variance achieved 64.96%. Factors related to higher work stress included working in a hospital-based nursing home, having a fixed schedule, night work, feeling burden, inconvenient facility, less enthusiasm, and self-rated higher stress.\n\n\nCONCLUSION\nWork stress for care attendants in nursing homes is related to human resource management and quality of care. We suggest potential management strategies to alleviate work stress for these workers.",
"title": ""
},
{
"docid": "042431e96028ed9729e6b174a78d642d",
"text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.",
"title": ""
},
{
"docid": "a4788b60b0fc16551f03557483a8a532",
"text": "The rapid growth in the population density in urban cities demands tolerable provision of services and infrastructure. To meet the needs of city inhabitants. Thus, increase in the request for embedded devices, such as sensors, actuators, and smartphones, etc., which is providing a great business potential towards the new era of Internet of Things (IoT); in which all the devices are capable of interconnecting and communicating with each other over the Internet. Therefore, the Internet technologies provide a way towards integrating and sharing a common communication medium. Having such knowledge, in this paper, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. We proposed a complete system, which consists of various types of sensors deployment including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects, etc. A four-tier architecture is proposed which include 1) Bottom Tier-1: which is responsible for IoT sources, data generations, and collections 2) Intermediate Tier-1: That is responsible for all type of communication between sensors, relays, base stations, the internet, etc. 3) Intermediate Tier 2: it is responsible for data management and processing using Hadoop framework, and 4) Top tier: is responsible for application and usage of the data analysis and results generated. The system implementation consists of various steps that start from data generation and collecting, aggregating, filtration, classification, preprocessing, computing and decision making. The proposed system is implemented using Hadoop with Spark, voltDB, Storm or S4 for real time processing of the IoT data to generate results in order to establish the smart city. For urban planning or city future development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking weather, pollution, and vehicle data sets are used for analysis and evaluation. Such type of system with full functionalities does not exist. Similarly, the results show that the proposed system is more scalable and efficient than the existing systems. Moreover, the system efficiency is measured in term of throughput and processing time.",
"title": ""
},
{
"docid": "58677916e11e6d5401b7396d117a517b",
"text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.",
"title": ""
},
{
"docid": "5828218248b4da8991b18dc698ef25ee",
"text": "Little is known about the mechanisms of smartphone features that are used in sealing relationships between psychopathology and problematic smartphone use. Our purpose was to investigate two specific smartphone usage types e process use and social use e for associations with depression and anxiety; and in accounting for relationships between anxiety/depression and problematic smartphone use. Social smartphone usage involves social feature engagement (e.g., social networking, messaging), while process usage involves non-social feature engagement (e.g., news consumption, entertainment, relaxation). 308 participants from Amazon's Mechanical Turk internet labor market answered questionnaires about their depression and anxiety symptoms, and problematic smartphone use along with process and social smartphone use dimensions. Statistically adjusting for age and sex, we discovered the association between anxiety symptoms was stronger with process versus social smartphone use. Depression symptom severity was negatively associated with greater social smartphone use. Process smartphone use was more strongly associated with problematic smartphone use. Finally, process smartphone use accounted for relationships between anxiety severity and problematic smartphone use. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8ef58dee2a9cbda23f642cb07bed013b",
"text": "Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.",
"title": ""
},
{
"docid": "6936462dee2424b92c7476faed5b5a23",
"text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.",
"title": ""
},
{
"docid": "17676785398d4ed24cc04cb3363a7596",
"text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.",
"title": ""
},
{
"docid": "af7f83599c163d0f519f1e2636ae8d44",
"text": "There is a set of characterological attributes thought to be associated with developing success at critical thinking (CT). This paper explores the disposition toward CT theoretically, and then as it appears to be manifest in college students. Factor analytic research grounded in a consensus-based conceptual analysis of CT described seven aspects of the overall disposition toward CT: truth-seeking, open-mindedness, analyticity, systematicity, CTconfidence, inquisitiveness, and cognitive maturity. The California Critical Thinking Disposition Inventory (CCTDI), developed in 1992, was used to sample college students at two comprehensive universities. Entering college freshman students showed strengths in openmindedness and inquisitiveness, weaknesses in systematicity and opposition to truth-seeking. Additional research indicates the disposition toward CT is highly correlated with the psychological constructs of absorption and openness to experience, and strongly predictive of ego-resiliency. A preliminary study explores the interesting and potentially complex interrelationship between the disposition toward CT and CT abilities. In addition to the significance of this work for psychological studies of human development, empirical research on the disposition toward CT promises important implications for all levels of education. 1 This essay appeared as Facione, PA, Sánchez, (Giancarlo) CA, Facione, NC & Gainen, J., (1995). The disposition toward critical thinking. Journal of General Education. Volume 44, Number(1). 1-25.",
"title": ""
},
{
"docid": "b2ec062fd7a7a9b124f2663a2fb002cb",
"text": "Major international projects are underway that are aimed at creating a comprehensive catalogue of all the genes responsible for the initiation and progression of cancer. These studies involve the sequencing of matched tumour–normal samples followed by mathematical analysis to identify those genes in which mutations occur more frequently than expected by random chance. Here we describe a fundamental problem with cancer genome studies: as the sample size increases, the list of putatively significant genes produced by current analytical methods burgeons into the hundreds. The list includes many implausible genes (such as those encoding olfactory receptors and the muscle protein titin), suggesting extensive false-positive findings that overshadow true driver events. We show that this problem stems largely from mutational heterogeneity and provide a novel analytical methodology, MutSigCV, for resolving the problem. We apply MutSigCV to exome sequences from 3,083 tumour–normal pairs and discover extraordinary variation in mutation frequency and spectrum within cancer types, which sheds light on mutational processes and disease aetiology, and in mutation frequency across the genome, which is strongly correlated with DNA replication timing and also with transcriptional activity. By incorporating mutational heterogeneity into the analyses, MutSigCV is able to eliminate most of the apparent artefactual findings and enable the identification of genes truly associated with cancer.",
"title": ""
},
{
"docid": "647c10e242a4ceaecf218565e9b9675b",
"text": "After 40 years of investigation, steady-state visually evoked potentials (SSVEPs) have been shown to be useful for many paradigms in cognitive (visual attention, binocular rivalry, working memory, and brain rhythms) and clinical neuroscience (aging, neurodegenerative disorders, schizophrenia, ophthalmic pathologies, migraine, autism, depression, anxiety, stress, and epilepsy). Recently, in engineering, SSVEPs found a novel application for SSVEP-driven brain-computer interface (BCI) systems. Although some SSVEP properties are well documented, many questions are still hotly debated. We provide an overview of recent SSVEP studies in neuroscience (using implanted and scalp EEG, fMRI, or PET), with the perspective of modern theories about the visual pathway. We investigate the steady-state evoked activity, its properties, and the mechanisms behind SSVEP generation. Next, we describe the SSVEP-BCI paradigm and review recently developed SSVEP-based BCI systems. Lastly, we outline future research directions related to basic and applied aspects of SSVEPs.",
"title": ""
},
{
"docid": "595e68cfcf7b2606f42f2ad5afb9713a",
"text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.",
"title": ""
}
] |
scidocsrr
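Each row in this data listing follows the same flattened layout: a 32-character query id, a free-text query, the relevant (positive) passages, a longer list of non-relevant (negative) passages, each passage an object with docid, text, and title, and a subset tag such as scidocsrr. The sketch below shows one way such a record might be flattened into labeled passages in Python; the passage fields (docid, text, title) mirror the rows shown here, while the top-level key names, the example values, and the 1/0 relevance labels are assumptions for illustration, not part of any documented loader.

```python
from typing import Dict, Iterator, Tuple

def labeled_passages(record: Dict) -> Iterator[Tuple[str, str, int]]:
    """Yield (docid, text, label) triples; label 1 marks a positive passage, 0 a negative one."""
    for passage in record.get("positive_passages", []):
        yield passage["docid"], passage["text"], 1
    for passage in record.get("negative_passages", []):
        yield passage["docid"], passage["text"], 0

if __name__ == "__main__":
    # Hypothetical record using the same passage fields as the rows in this listing.
    record = {
        "query_id": "0" * 32,  # placeholder 32-character identifier
        "query": "example query text",
        "positive_passages": [
            {"docid": "abc123", "text": "a relevant abstract ...", "title": ""},
        ],
        "negative_passages": [
            {"docid": "def456", "text": "an unrelated abstract ...", "title": ""},
        ],
        "subset": "scidocsrr",
    }
    for docid, text, label in labeled_passages(record):
        print(docid, label)
```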
|
7c0719b2936701c6e4ca5b3ed3cf2d91
|
Curating and contextualizing Twitter stories to assist with social newsgathering
|
[
{
"docid": "463ef40777aaf14406186d5d4d99ba13",
"text": "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.",
"title": ""
}
] |
[
{
"docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd",
"text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.",
"title": ""
},
{
"docid": "e693e811edb2196baa1fd22b25246eaf",
"text": "The chicken is an excellent model organism for studying vertebrate limb development, mainly because of the ease of manipulating the developing limb in vivo. Classical chicken embryology has provided fate maps and elucidated the cell-cell interactions that specify limb pattern. The first defined chemical that can mimic one of these interactions was discovered by experiments on developing chick limbs and, over the last 15 years or so, the role of an increasing number of developmentally important genes has been uncovered. The principles that underlie limb development in chickens are applicable to other vertebrates and there are growing links with clinical genetics. The sequence of the chicken genome, together with other recently assembled chicken genomic resources, will present new opportunities for exploiting the ease of manipulating the limb.",
"title": ""
},
{
"docid": "394d96f18402c7033f27f5ead8219698",
"text": "Today, online social networks in the World Wide Web become increasingly interactive and networked. Web 2.0 technologies provide a multitude of platforms, such as blogs, wikis, and forums where for example consumers can disseminate data about products and manufacturers. This data provides an abundance of information on personal experiences and opinions which are extremely relevant for companies and sales organizations. A new approach based on text mining and social network analysis is presented which allows detecting opinion leaders and opinion trends. This allows getting a better understanding of the opinion formation. The overall concept is presented and illustrated by an example.",
"title": ""
},
{
"docid": "6ccad3fd0fea9102d15bd37306f5f562",
"text": "This paper reviews deposition, integration, and device fabrication of ferroelectric PbZrxTi1−xO3 (PZT) films for applications in microelectromechanical systems. As examples, a piezoelectric ultrasonic micromotor and pyroelectric infrared detector array are presented. A summary of the published data on the piezoelectric properties of PZT thin films is given. The figures of merit for various applications are discussed. Some considerations and results on operation, reliability, and depolarization of PZT thin films are presented.",
"title": ""
},
{
"docid": "2891ce3327617e9e957488ea21e9a20c",
"text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.",
"title": ""
},
{
"docid": "b5831795da97befd3241b9d7d085a20f",
"text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.",
"title": ""
},
{
"docid": "ec07bddc8bdc96678eebf49c7ee3752e",
"text": "This study aimed to assess the effects of core stability training on lower limbs' muscular asymmetries and imbalances in team sport. Twenty footballers were divided into two groups, either core stability or control group. Before each daily practice, core stability group (n = 10) performed a core stability training programme, while control group (n = 10) did a standard warm-up. The effects of the core stability training programme were assessed by performing isokinetic tests and single-leg countermovement jumps. Significant improvement was found for knee extensors peak torque at 3.14 rad · s(-1) (14%; P < 0.05), knee flexors peak torque at 1.05 and 3.14 rad · s(-1) (19% and 22% with P < 0.01 and P < 0.01, respectively) and peak torque flexors/extensors ratios at 1.05 and 3.14 rad · s(-1) (7.7% and 8.5% with P < 0.05 and P < 0.05, respectively) only in the core stability group. The jump tests showed a significant reduction in the strength asymmetries in core stability group (-71.4%; P = 0.02) while a concurrent increase was seen in the control group (33.3%; P < 0.05). This study provides practical evidence in combining core exercises for optimal lower limbs strength balance development in young soccer players.",
"title": ""
},
{
"docid": "eece6349d77b415115fa6afbbbd85190",
"text": "BACKGROUND\nAcute appendicitis is the most common cause of acute abdomen. Approximately 7% of the population will be affected by this condition during full life. The development of AIR score may contribute to diagnosis associating easy clinical criteria and two simple laboratory tests.\n\n\nAIM\nTo evaluate the score AIR (Appendicitis Inflammatory Response score) as a tool for the diagnosis and prediction of severity of acute appendicitis.\n\n\nMETHOD\nWere evaluated all patients undergoing surgical appendectomy. From 273 patients, 126 were excluded due to exclusion criteria. All patients were submitted o AIR score.\n\n\nRESULTS\nThe value of the C-reactive protein and the percentage of leukocytes segmented blood count showed a direct relationship with the phase of acute appendicitis.\n\n\nCONCLUSION\nAs for the laboratory criteria, serum C-reactive protein and assessment of the percentage of the polymorphonuclear leukocytes count were important to diagnosis and disease stratification.",
"title": ""
},
{
"docid": "c1956e4c6b732fa6a420d4c69cfbe529",
"text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.",
"title": ""
},
{
"docid": "3f5f8e75af4cc24e260f654f8834a76c",
"text": "The Balanced Scorecard (BSC) methodology focuses on major critical issues of modern business organisations: the effective measurement of corporate performance and the evaluation of the successful implementation of corporate strategy. Despite the increased adoption of the BSC methodology by numerous business organisations during the last decade, limited case studies concern non-profit organisations (e.g. public sector, educational institutions, healthcare organisations, etc.). The main aim of this study is to present the development of a performance measurement system for public health care organisations, in the context of BSC methodology. The proposed approach considers the distinguished characteristics of the aforementioned sector (e.g. lack of competition, social character of organisations, etc.). The proposed measurement system contains the most important financial performance indicators, as well as non-financial performance indicators that are able to examine the quality of the provided services, the satisfaction of internal and external customers, the selfimprovement system of the organisation and the ability of the organisation to adapt and change. These indicators play the role of Key Performance Indicators (KPIs), in the context of BSC methodology. The presented analysis is based on a MCDA approach, where the UTASTAR method is used in order to aggregate the marginal performance of KPIs. This approach is able to take into account the preferences of the management of the organisation regarding the achievement of the defined strategic objectives. The main results of the proposed approach refer to the evaluation of the overall scores for each one of the main dimensions of the BSC methodology (i.e. financial, customer, internal business process, and innovation-learning). These results are able to help the organisation to evaluate and revise its strategy, and generally to adopt modern management approaches in every day practise. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a1df80a201943ad386a7836c7ba3ff94",
"text": "This paper estimates the effect of air pollution on child hospitalizations for asthma using naturally occurring seasonal variations in pollution within zip codes. Of the pollutants considered, carbon monoxide (CO) has a significant effect on asthma for children ages 1-18: if 1998 pollution levels were at their 1992 levels, there would be a 5-14% increase in asthma admissions. Also, households respond to information about pollution with avoidance behavior, suggesting it is important to account for these endogenous responses when measuring the effect of pollution on health. Finally, the effect of pollution is greater for children of lower socio-economic status (SES), indicating that pollution is one potential mechanism by which SES affects health.",
"title": ""
},
{
"docid": "78829447a6cbf0aa020ef098a275a16d",
"text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.",
"title": ""
},
{
"docid": "057621c670a9b7253ba829210c530dca",
"text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.",
"title": ""
},
{
"docid": "ca4aa2c6f4096bbffaa2e3e1dd06fbe8",
"text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller, that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.",
"title": ""
},
{
"docid": "eaf30f31b332869bc45ff1288c41da71",
"text": "Search Engines: Information Retrieval In Practice is writen by Bruce Croft in English language. Release on 2009-02-16, this book has 552 page count that consist of helpful information with easy reading experience. The book was publish by Addison-Wesley, it is one of best subjects book genre that gave you everything love about reading. You can find Search Engines: Information Retrieval In Practice book with ISBN 0136072240.",
"title": ""
},
{
"docid": "dce75562a7e8b02364d39fd7eb407748",
"text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.",
"title": ""
},
{
"docid": "b59c843d687a1dbed0ef1b891c314424",
"text": "Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called as endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the end-member signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene. In practice, this is a combinatorial problem which calls for efficient linear sparse regression (SR) techniques based on sparsity-inducing regularizers, since the number of endmembers participating in a mixed pixel is usually very small compared with the (ever-growing) dimensionality (and availability) of spectral libraries. Linear SR is an area of very active research, with strong links to compressed sensing, basis pursuit (BP), BP denoising, and matching pursuit. In this paper, we study the linear spectral unmixing problem under the light of recent theoretical results published in those referred to areas. Furthermore, we provide a comparison of several available and new linear SR algorithms, with the ultimate goal of analyzing their potential in solving the spectral unmixing problem by resorting to available spectral libraries. Our experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra. This opens new perspectives for spectral unmixing, since the abundance estimation process no longer depends on the availability of pure spectral signatures in the input data nor on the capacity of a certain endmember extraction algorithm to identify such pure signatures.",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
},
{
"docid": "589396a7c9dae0567f0bcd4d83461a6f",
"text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.",
"title": ""
},
{
"docid": "cd55fc3fafe2618f743a845d89c3a796",
"text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a",
"title": ""
}
] |
scidocsrr
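The record above pairs the query "Curating and contextualizing Twitter stories to assist with social newsgathering" with one positive passage and a list of negative passages. A minimal sketch of a lexical baseline that ranks such passages against the query by bag-of-words cosine similarity follows; the docids are taken from that record, but the truncated passage texts and the choice of this particular baseline are illustrative assumptions, not something defined by the dataset.

```python
import math
import re
from collections import Counter
from typing import Dict, List

def tokenize(text: str) -> List[str]:
    """Lowercase word tokenizer; intentionally simplistic."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_passages(query: str, passages: List[Dict]) -> List[Dict]:
    """Return passages sorted by lexical similarity to the query, highest first."""
    q_vec = Counter(tokenize(query))
    return sorted(passages,
                  key=lambda p: cosine(q_vec, Counter(tokenize(p["text"]))),
                  reverse=True)

if __name__ == "__main__":
    query = "Curating and contextualizing Twitter stories to assist with social newsgathering"
    passages = [  # docids from the record above; texts truncated for brevity
        {"docid": "463ef40777aaf14406186d5d4d99ba13", "label": 1,
         "text": "Social media is already a fixture for reporting for many journalists ..."},
        {"docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd", "label": 0,
         "text": "We propose and study an Asynchronous parallel Greedy Coordinate Descent algorithm ..."},
    ]
    for p in rank_passages(query, passages):
        print(p["docid"], p["label"])
```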
|
dcd2dd029398250c200f85104d03a989
|
A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition
|
[
{
"docid": "3f88da8f70976c11bf5bab5f1d438d58",
"text": "The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based “bag-of-mouths” model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67 % on the 2014 dataset.",
"title": ""
}
] |
[
{
"docid": "d5019a5536950482e166d68dc3a7cac7",
"text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.",
"title": ""
},
{
"docid": "5e9cc7e7933f85b6cffe103c074105d4",
"text": "Substrate-integrated waveguides (SIWs) maintain the advantages of planar circuits (low loss, low profile, easy manufacturing, and integration in a planar circuit board) and improve the quality factor of filter resonators. Empty substrate-integrated waveguides (ESIWs) substantially reduce the insertion losses, because waves propagate through air instead of a lossy dielectric. The first ESIW used a simple tapering transition that cannot be used for thin substrates. A new transition has recently been proposed, which includes a taper also in the microstrip line, not only inside the ESIW, and so it can be used for all substrates, although measured return losses are only 13 dB. In this letter, the cited transition is improved by placing via holes that prevent undesired radiation, as well as two holes that help to ensure good accuracy in the mechanization of the input iris, thus allowing very good return losses (over 20 dB) in the measured results. A design procedure that allows the successful design of the proposed new transition is also provided. A back-to-back configuration of the improved new transition has been successfully manufactured and measured.",
"title": ""
},
{
"docid": "9592fc0ec54a5216562478414dc68eb4",
"text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.",
"title": ""
},
{
"docid": "7fb9cb7cb777d7f245b2444cd2cd4f9d",
"text": "Several recent studies have introduced lightweight versions of Java: reduced languages in which complex features like threads and reflection are dropped to enable rigorous arguments about key properties such as type safety. We carry this process a step further, omitting almost all features of the full language (including interfaces and even assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus does to languages such as ML and Haskell. It offers a similar computational \"feel,\" providing classes, methods, fields, inheritance, and dynamic typecasts with a semantics closely following Java's. A proof of type safety for Featherweight Java thus illustrates many of the interesting features of a safety proof for the full language, while remaining pleasingly compact. The minimal syntax, typing rules, and operational semantics of Featherweight Java make it a handy tool for studying the consequences of extensions and variations. As an illustration of its utility in this regard, we extend Featherweight Java with generic classes in the style of GJ (Bracha, Odersky, Stoutamire, and Wadler) and give a detailed proof of type safety. The extended system formalizes for the first time some of the key features of GJ.",
"title": ""
},
{
"docid": "db5dcaddaa38f472afaa84b61e4ea650",
"text": "The dynamics of load, especially induction motors, are the driving force for short-term voltage stability (STVS) problems. In this paper, the equivalent rotation speed of motors is identified online and its recovery time is estimated next to realize an emergency-demand-response (EDR) based under speed load shedding (USLS) scheme to improve STVS. The proposed scheme consists of an EDR program and two regular stages (RSs). In the EDR program, contracted load is used as a fast-response resource rather than the last defense. The estimated recovery time (ERT) is used as the triggering signal for the EDR program. In the RSs, the amount of load to be shed at each bus is determined according to the assigned weights based on ERTs. Case studies on a practical power system in China Southern Power Grid have validated the performance of the proposed USLS scheme under various contingency scenarios. The utilization of EDR resources and the adaptive distribution of shedding amount in RSs guarantee faster voltage recovery. Therefore, USLS offers a new and more effective approach compared with existing under voltage load shedding to improve STVS.",
"title": ""
},
{
"docid": "3e0a52bc1fdf84279dee74898fcd93bf",
"text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.",
"title": ""
},
{
"docid": "0d706058ff906f643d35295075fa4199",
"text": "[Purpose] The present study examined the effects of treatment using PNF extension techniques on the pain, pressure pain, and neck and shoulder functions of the upper trapezius muscles of myofascial pain syndrome (MPS) patients. [Subjects] Thirty-two patients with MPS in the upper trapezius muscle were divided into two groups: a PNF group (n=16), and a control group (n=16) [Methods] The PNF group received upper trapezius muscle relaxation therapy and shoulder joint stabilizing exercises. Subjects in the control group received only the general physical therapies for the upper trapezius muscles. Subjects were measured for pain on a visual analog scale (VAS), pressure pain threshold (PPT), the neck disability index (NDI), and the Constant-Murley scale (CMS). [Results] None of the VAS, PPT, and NDI results showed significant differences between the groups, while performing postures, internal rotation, and external rotation among the CMS items showed significant differences between the groups. [Conclusion] Exercise programs that apply PNF techniques can be said to be effective at improving the function of MPS patients.",
"title": ""
},
{
"docid": "1862f864cc1e24346c063ebc8a9e6a59",
"text": "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "838bd8a38f9d67d768a34183c72da07d",
"text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.",
"title": ""
},
{
"docid": "ec4b7c50f3277bb107961c9953fe3fc4",
"text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview",
"title": ""
},
{
"docid": "68ecfd8434fb7b28e3c5c88effde3c2a",
"text": "Enterprise Resource Planning (ERP) systems involve the purchase of pre-written software modules from third party suppliers, rather than bespoke (i.e. specially tailored) production of software requirements, and are often described as a buy rather than build approach to information systems development. Current research has shown that there has been a notable decrease in the satisfaction levels of ERP implementations over the period 1998-2000.\nThe environment in which such software is selected, implemented and used may be viewed as a social activity system, which consists of a variety of stakeholders e.g. users, developers, managers, suppliers and consultants. In such a context, an interpretive research approach (Walsham, 1995) is appropriate in order to understand the influences at work.\nThis paper reports on an interpretive study that attempts to understand the reasons for this apparent lack of success by analyzing issues raised by representatives of key stakeholder groups. Resulting critical success factors are then compared with those found in the literature, most notably those of Bancroft et al (1998).\nConclusions are drawn on a wide range of organizational, management and political issues that relate to the multiplicity of stakeholder perceptions.",
"title": ""
},
{
"docid": "6df55b88150f5d52aa30ab770f464546",
"text": "OBJECTIVES\nThe objective of this study has been to review the incidence of biological and technical complications in case of tooth-implant-supported fixed partial denture (FPD) treatments on the basis of survival data regarding clinical cases.\n\n\nMATERIAL AND METHODS\nBased on the treatment documentations of a Bundeswehr dental clinic (Cologne-Wahn German Air Force Garrison), the medical charts of 83 patients with tooth-implant-supported FPDs were completely recorded. The median follow-up time was 4.73 (time range: 2.2-8.3) years. In the process, survival curves according to Kaplan and Meier were applied in addition to frequency counts.\n\n\nRESULTS\nA total of 84 tooth-implant (83 patients) connected prostheses were followed (132 abutment teeth, 142 implant abutments (Branemark, Straumann). FPDs: the time-dependent illustration reveals that after 5 years, as many as 10% of the tooth-implant-supported FPDs already had to be subjected to a technical modification (renewal (n=2), reintegration (n=4), veneer fracture (n=5), fracture of frame (n=2)). In contrast to non-rigid connection of teeth and implants, technical modification measures were rarely required in case of tooth-implant-supported FPDs with a rigid connection. There was no statistical difference between technical complications and the used implant system. Abutment teeth and implants: during the observation period, none of the functionally loaded implants (n=142) had to be removed. Three of the overall 132 abutment teeth were lost because of periodontal inflammation. The time-dependent illustration reveals, that after 5 years as many as 8% of the abutment teeth already required corresponding therapeutic measures (periodontal treatment (5%), filling therapy (2.5%), endodontic treatment (0.5%)). After as few as 3 years, the connection related complications of implant abutments (abutment or occlusal screw loosening, loss of cementation) already had to be corrected in approximately 8% of the cases. In the utilization period there was no screw or abutment fracture.\n\n\nCONCLUSION\nTechnical complications of implant-supported FPDs are dependent on the different bridge configurations. When using rigid functional connections, similarly favourable values will be achieved as in case of solely implant-supported FPDs. In this study other characteristics like different fixation systems (screwed vs. cemented) or various implant systems had no significant effect to the rate of technical complications.",
"title": ""
},
{
"docid": "88d8fe415f3026a45e0aa4b1a8c36c57",
"text": "Traffic sign detection plays an important role in a number of practical applications, such as intelligent driver assistance and roadway inventory management. In order to process the large amount of data from either real-time videos or large off-line databases, a high-throughput traffic sign detection system is required. In this paper, we propose an FPGA-based hardware accelerator for traffic sign detection based on cascade classifiers. To maximize the throughput and power efficiency, we propose several novel ideas, including: 1) rearranged numerical operations; 2) shared image storage; 3) adaptive workload distribution; and 4) fast image block integration. The proposed design is evaluated on a Xilinx ZC706 board. When processing high-definition (1080p) video, it achieves the throughput of 126 frames/s and the energy efficiency of 0.041 J/frame.",
"title": ""
},
{
"docid": "47eef1318d313e2f89bb700f8cd34472",
"text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.",
"title": ""
},
{
"docid": "ed22fe0d13d4450005abe653f41df2c0",
"text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.",
"title": ""
},
{
"docid": "de7b16961bb4aa2001a3d0859f68e4c6",
"text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on selfcalibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.",
"title": ""
},
{
"docid": "6956dadf7462db200559b5c51a09c481",
"text": "W propose that the temporal dimension is fragile in that choices are insufficiently sensitive to it, and second, such sensitivity as exists is exceptionally malleable, unlike other dimensions such as money, which are attended by default. To test this, we axiomatize a “constant-sensitivity” discount function, and in four studies, we show that the degree of time-sensitivity is inadequate relative to the compound discounting norm, and strongly susceptible to manipulation. Time-sensitivity is increased by a comparative within-subject presentation (Experiment 1), direct instruction (Experiment 3), and provision of a visual cue for time duration (Experiment 4); time-sensitivity is decreased using a time pressure manipulation (Experiment 2). In each study, the sensitivity manipulation has an opposite effect on near-future and far-future valuations: Increased sensitivity decreases discounting in the near future and increases discounting in the far future. In contrast, such sensitivity manipulations have little effect on the money dimension.",
"title": ""
},
{
"docid": "a031f8352b511987e95f7d9127b44436",
"text": "The environmental robustness of DNN-based acoustic models can be significantly improved by using multi-condition training data. However, as data collection is a costly proposition, simulation of the desired conditions is a frequently adopted strategy. In this paper we detail a data augmentation approach for far-field ASR. We examine the impact of using simulated room impulse responses (RIRs), as real RIRs can be difficult to acquire, and also the effect of adding point-source noises. We find that the performance gap between using simulated and real RIRs can be eliminated when point-source noises are added. Further we show that the trained acoustic models not only perform well in the distant-talking scenario but also provide better results in the close-talking scenario. We evaluate our approach on several LVCSR tasks which can adequately represent both scenarios.",
"title": ""
},
{
"docid": "f492f0121eba327778151a462e32e7b4",
"text": "We describe the instructional software JFLAP 4.0 and how it can be used to provide a hands-on formal languages and automata theory course. JFLAP 4.0 doubles the number of chapters worth of material from JFLAP 3.1, now covering topics from eleven of thirteen chapters for a semester course. JFLAP 4.0 has easier interactive approaches to previous topics and covers many new topics including three parsing algorithms, multi-tape Turing machines, L-systems, and grammar transformations.",
"title": ""
}
] |
scidocsrr
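Given rows like the one above, a retrieval model is typically scored by where the positive docids land in its ranking for each query. Below is a small sketch of two standard metrics, reciprocal rank and recall@k, applied to one hypothetical ranking; the positive docid is taken from the video-emotion record above, while the ranking order itself is invented purely for illustration.

```python
from typing import List, Set

def reciprocal_rank(ranked_docids: List[str], positives: Set[str]) -> float:
    """Return 1 / rank of the first relevant document, or 0.0 if none is retrieved."""
    for rank, docid in enumerate(ranked_docids, start=1):
        if docid in positives:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_docids: List[str], positives: Set[str], k: int) -> float:
    """Return the fraction of relevant documents found in the top-k results."""
    if not positives:
        return 0.0
    hits = sum(1 for docid in ranked_docids[:k] if docid in positives)
    return hits / len(positives)

if __name__ == "__main__":
    # Positive docid from the video-emotion record above; the ordering is hypothetical.
    positives = {"3f88da8f70976c11bf5bab5f1d438d58"}
    ranked = [
        "d5019a5536950482e166d68dc3a7cac7",
        "3f88da8f70976c11bf5bab5f1d438d58",
        "5e9cc7e7933f85b6cffe103c074105d4",
    ]
    print("Reciprocal rank:", reciprocal_rank(ranked, positives))  # 0.5
    print("Recall@2:", recall_at_k(ranked, positives, k=2))        # 1.0
```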
|
fdc696b24e0e5e14853186cd23f84f10
|
Hybrid Recommender Systems: A Systematic Literature Review
|
[
{
"docid": "e870f2fe9a26b241bdeca882b6186169",
"text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.",
"title": ""
}
] |
[
{
"docid": "8c308305b4a04934126c4746c8333b52",
"text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.",
"title": ""
},
{
"docid": "8f025fda5bbf9468dc65c16539d0aa0d",
"text": "Image compression is one of the key image processing techniques in signal processing and communication systems. Compression of images leads to reduction of storage space and reduces transmission bandwidth and hence also the cost. Advances in VLSI technology are rapidly changing the technological needs of common man. One of the major technological domains that are directly related to mankind is image compression. Neural networks can be used for image compression. Neural network architectures have proven to be more reliable, robust, and programmable and offer better performance when compared with classical techniques. In this work the main focus is on development of new architectures for hardware implementation of 3-D neural network based image compression optimizing area, power and speed as specific to ASIC implementation, and comparison with FPGA.",
"title": ""
},
{
"docid": "f3345e524ff05bcd6c8a13bbb5e2aa6d",
"text": "Permission-induced attacks, i.e., security breaches enabled by permission misuse, are among the most critical and frequent issues threatening the security of Android devices. By ignoring the temporal aspects of an attack during the analysis and enforcement, the state-of-the-art approaches aimed at protecting the users against such attacks are prone to have low-coverage in detection and high-disruption in prevention of permission-induced attacks. To address this shortcomings, we present Terminator, a temporal permission analysis and enforcement framework for Android. Leveraging temporal logic model checking,Terminator's analyzer identifies permission-induced threats with respect to dynamic permission states of the apps. At runtime, Terminator's enforcer selectively leases (i.e., temporarily grants) permissions to apps when the system is in a safe state, and revokes the permissions when the system moves to an unsafe state realizing the identified threats. The results of our experiments, conducted over thousands of apps, indicate that Terminator is able to provide an effective, yet non-disruptive defense against permission-induced attacks. We also show that our approach, which does not require modification to the Android framework or apps' implementation logic, is highly reliable and widely applicable.",
"title": ""
},
{
"docid": "6087e066b04b9c3ac874f3c58979f89a",
"text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.",
"title": ""
},
{
"docid": "5e51b4363a156f4c3fde12da345e9438",
"text": "In this work we present an annotation framework to capture causality between events, inspired by TimeML, and a language resource covering both temporal and causal relations. This data set is then used to build an automatic extraction system for causal signals and causal links between given event pairs. The evaluation and analysis of the system’s performance provides an insight into explicit causality in text and the connection between temporal and causal relations.",
"title": ""
},
{
"docid": "57ffea840501c5e9a77a2c7e0d609d07",
"text": "Datasets power computer vison research and drive breakthroughs. Larger and larger datasets are needed to better utilize the exponentially increasing computing power. However, datasets generation is both time consuming and expensive as human beings are required for image labelling. Human labelling cannot scale well. How can we generate larger image datasets easier and faster? In this paper, we provide a new approach for large scale datasets generation. We generate images from 3D object models directly. The large volume of freely available 3D CAD models and mature computer graphics techniques make generating large scale image datasets from 3D models very efficient. As little human effort involved in this process, it can scale very well. Rather than releasing a static dataset, we will also provide a software library for dataset generation so that the computer vision community can easily extend or modify the datasets accordingly.",
"title": ""
},
{
"docid": "bd8ae67f959a7b840eff7e8c400a41e0",
"text": "Enabling a humanoid robot to drive a car, requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensorbased reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.",
"title": ""
},
{
"docid": "0e2b885774f69342ade2b9ad1bc84835",
"text": "History repeatedly demonstrates that rural communities have unique technological needs. Yet, we know little about how rural communities use modern technologies, so we lack knowledge on how to design for them. To address this gap, our empirical paper investigates behavioral differences between more than 3,000 rural and urban social media users. Using a dataset collected from a broadly popular social network site, we analyze users' profiles, 340,000 online friendships and 200,000 interpersonal messages. Using social capital theory, we predict differences between rural and urban users and find strong evidence supporting our hypotheses. Namely, rural people articulate far fewer friends online, and those friends live much closer to home. Our results also indicate that the groups have substantially different gender distributions and use privacy features differently. We conclude by discussing design implications drawn from our findings; most importantly, designers should reconsider the binary friend-or-not model to allow for incremental trust-building.",
"title": ""
},
{
"docid": "94f1de78a229dc542a67ea564a0b259f",
"text": "Voice enabled personal assistants like Microsoft Cortana are becoming better every day. As a result more users are relying on such software to accomplish more tasks. While these applications are significantly improving due to great advancements in the underlying technologies, there are still shortcomings in their performance resulting in a class of user queries that such assistants cannot yet handle with satisfactory results. We analyze the data from millions of user queries, and build a machine learning system capable of classifying user queries into two classes; a class of queries that are addressable by Cortana with high user satisfaction, and a class of queries that are not. We then use unsupervised learning to cluster similar queries and assign them to human assistants who can complement Cortana functionality.",
"title": ""
},
{
"docid": "ff5fb2a555c9bcdfad666406b94ebc71",
"text": "Driven by profits, spam reviews for product promotion or suppression become increasingly rampant in online shopping platforms. This paper focuses on detecting hidden spam users based on product reviews. In the literature, there have been tremendous studies suggesting diversified methods for spammer detection, but whether these methods can be combined effectively for higher performance remains unclear. Along this line, a hybrid PU-learning-based Spammer Detection (hPSD) model is proposed in this paper. On one hand, hPSD can detect multi-type spammers by injecting or recognizing only a small portion of positive samples, which meets particularly real-world application scenarios. More importantly, hPSD can leverage both user features and user relations to build a spammer classifier via a semi-supervised hybrid learning framework. Experimental results on movie data sets with shilling injection show that hPSD outperforms several state-of-the-art baseline methods. In particular, hPSD shows great potential in detecting hidden spammers as well as their underlying employers from a real-life Amazon data set. These demonstrate the effectiveness and practical value of hPSD for real-life applications.",
"title": ""
},
{
"docid": "128de222f033bc2c50b5af44db8f6f6f",
"text": "Copyright & reuse City University London has developed City Research Online so that its users may access the research outputs of City University London's staff. Copyright © and Moral Rights for this paper are retained by the individual author(s) and/ or other copyright holders. All material in City Research Online is checked for eligibility for copyright before being made available in the live archive. URLs from City Research Online may be freely distributed and linked to from other web pages.",
"title": ""
},
{
"docid": "bf156a97587b55e8afe255fe1b1a8ac0",
"text": "In recent years researches are focused towards mining infrequent patterns rather than frequent patterns. Mining infrequent pattern plays vital role in detecting any abnormal event. In this paper, an algorithm named Infrequent Pattern Miner for Data Streams (IPM-DS) is proposed for mining nonzero infrequent patterns from data streams. The proposed algorithm adopts the FP-growth based approach for generating all infrequent patterns. The proposed algorithm (IPM-DS) is evaluated using health data set collected from wearable physiological sensors that measure vital parameters such as Heart Rate (HR), Breathing Rate (BR), Oxygen Saturation (SPO2) and Blood pressure (BP) and also with two publically available data sets such as e-coli and Wine from UCI repository. The experimental results show that the proposed algorithm generates all possible infrequent patterns in less time.",
"title": ""
},
{
"docid": "1657df28bba01b18fb26bb8c823ad4b4",
"text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.",
"title": ""
},
{
"docid": "2117e3c0cf7854c8878417b7d84491ce",
"text": "We designed a new annotation scheme for formalising relation structures in research papers, through the investigation of computer science papers. The annotation scheme is based on the hypothesis that identifying the role of entities and events that are described in a paper is useful for intelligent information retrieval in academic literature, and the role can be determined by the relationship between the author and the described entities or events, and relationships among them. Using the scheme, we have annotated research abstracts from the IPSJ Journal published in Japanese by the Information Processing Society of Japan. On the basis of the annotated corpus, we have developed a prototype information extraction system which has the facility to classify sentences according to the relationship between entities mentioned, to help find the role of the entity in which the searcher is interested.",
"title": ""
},
{
"docid": "43b0358c4d3fec1dd58600847bf0c1b8",
"text": "The transformative promises and potential of Big and Open Data are substantial for e-government services, openness and transparency, governments, and the interaction between governments, citizens, and the business sector. From “smart” government to transformational government, Big and Open Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; promote greater openness; and usher in a new era of policyand decision-making. There are, however, a range of policy challenges to address regarding Big and Open Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. After presenting a discussion of the open data policies that serve as a foundation for Big Data initiatives, this paper examines the ways in which the current information policy framework fails to address a number of these policy challenges. It then offers recommendations intended to serve as a beginning point for a revised policy framework to address significant issues raised by the U.S. government’s engagement in Big Data efforts.",
"title": ""
},
{
"docid": "db5ff75a7966ec6c1503764d7e510108",
"text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.",
"title": ""
},
{
"docid": "39007be7d6b2f296e8dff368d49ac0fe",
"text": "Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence has indicated that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular, at high (beta/gamma) frequencies. These abnormalities are observed during task-related and spontaneous neuronal activity which may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. This will be followed by a review of studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. Finally, we shall identify critical issues for future research in this area.",
"title": ""
},
{
"docid": "9270af032d1adbf9829e7d723ff76849",
"text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.",
"title": ""
},
{
"docid": "b9c40aa4c8ac9d4b6cbfb2411c542998",
"text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.",
"title": ""
},
{
"docid": "2130cc3df3443c912d9a38f83a51ab14",
"text": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and activepixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in endto-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car’s on-board diagnostics interface. As an example application, we performed a preliminary end-toend learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data.",
"title": ""
}
] |
scidocsrr
|
9514041d98f05f2e6fe6f1cc1686c30c
|
Zero-Shot Learning on Semantic Class Prototype Graph
|
[
{
"docid": "be9fc2798c145abe70e652b7967c3760",
"text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.",
"title": ""
},
{
"docid": "85be4bd00c69fdd43841fa7112df20b1",
"text": "The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets.",
"title": ""
}
] |
[
{
"docid": "ba60234f9b1769ab83f588326e95742e",
"text": "Functional languages offer a high level of abstraction, which results in programs that are elegant and easy to understand. Central to the development of functional programming are inductive and coinductive types and associated programming constructs, such as pattern-matching. Whereas inductive types have a long tradition and are well supported in most languages, coinductive types are subject of more recent research and are less mainstream. We present CoCaml, a functional programming language extending OCaml, which allows us to define recursive functions on regular coinductive datatypes. These functions are defined like usual recursive functions, but parameterized by an equation solver. We present a full implementation of all the constructs and solvers and show how these can be used in a variety of examples, including operations on infinite lists, infinitary λ-terms, and p-adic numbers.",
"title": ""
},
{
"docid": "0685c33de763bdedf2a1271198569965",
"text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.",
"title": ""
},
{
"docid": "874cecfb3f21f4c145fda262e1eee369",
"text": "For many languages that use non-Roman based indigenous scripts (e.g., Arabic, Greek and Indic languages) one can often find a large amount of user generated transliterated content on the Web in the Roman script. Such content creates a monolingual or multi-lingual space with more than one script which we refer to as the Mixed-Script space. IR in the mixed-script space is challenging because queries written in either the native or the Roman script need to be matched to the documents written in both the scripts. Moreover, transliterated content features extensive spelling variations. In this paper, we formally introduce the concept of Mixed-Script IR, and through analysis of the query logs of Bing search engine, estimate the prevalence and thereby establish the importance of this problem. We also give a principled solution to handle the mixed-script term matching and spelling variation where the terms across the scripts are modelled jointly in a deep-learning architecture and can be compared in a low-dimensional abstract space. We present an extensive empirical analysis of the proposed method along with the evaluation results in an ad-hoc retrieval setting of mixed-script IR where the proposed method achieves significantly better results (12% increase in MRR and 29% increase in MAP) compared to other state-of-the-art baselines.",
"title": ""
},
{
"docid": "d94d31377a8dbe487f4fdcbfc0f2beb7",
"text": "A core novelty of Alpha Zero is the interleaving of tree search and deep learning, which has proven very successful in board games like Chess, Shogi and Go. These games have a discrete action space. However, many real-world reinforcement learning domains have continuous action spaces, for example in robotic control, navigation and self-driving cars. This paper presents the necessary theoretical extensions of Alpha Zero to deal with continuous action space. We also provide a preliminary experiment on the Pendulum swing-up task, empirically verifying the feasibility of our approach. Thereby, this work provides a first step towards the application of iterated search and learning in domains with a continuous action space.",
"title": ""
},
{
"docid": "fb162c94248297f35825ff1022ad2c59",
"text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "ada6c6b93b7d2109cd131a653117074a",
"text": "Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. e he Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies1. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.",
"title": ""
},
{
"docid": "aa729fab5a97378b2ce9ae6ae4ee4e66",
"text": "Previous information extraction (IE) systems are typically organized as a pipeline architecture of separated stages which make independent local decisions. When the data grows beyond some certain size, the extracted facts become inter-dependent and thus we can take advantage of information redundancy to conduct reasoning across documents and improve the performance of IE. We describe a joint inference approach based on information network structure to conduct cross-fact reasoning with an integer linear programming framework. Without using any additional labeled data this new method obtained 13.7%-24.4% user browsing cost reduction over a state-of-the-art IE system which extracts various types of facts independently.",
"title": ""
},
{
"docid": "4c67486d34309ac506341224e5e7e994",
"text": "Image deconvolution is still to be a challenging illposed problem for recovering a clear image from a given blurry image, when the point spread function is known. Although competitive deconvolution methods are numerically impressive and approach theoretical limits, they are becoming more complex, making analysis, and implementation difficult. Furthermore, accurate estimation of the regularization parameter is not easy for successfully solving image deconvolution problems. In this paper, we develop an effective approach for image restoration based on one explicit image filter guided filter. By applying the decouple of denoising and deblurring techniques to the deconvolution model, we reduce the optimization complexity and achieve a simple but effective algorithm to automatically compute the parameter in each iteration, which is based on Morozov’s discrepancy principle. Experimental results demonstrate that the proposed algorithm outperforms many state-of-the-art deconvolution methods in terms of both ISNR and visual quality. Keywords—Image deconvolution, guided filter, edge-preserving, adaptive parameter estimation.",
"title": ""
},
{
"docid": "e858a020c498272ce560656cecf15354",
"text": "A low-voltage, low-power CMOS voltage reference with high temperature stability in a wide temperature range is presented. The temperature dependence of mobility and oxide capacitance is removed by employing transistors in saturation and triode regions and the temperature dependence of threshold voltage is removed by exploiting the transistors in weak inversion region. Implemented in 0.13um CMOS, the proposed voltage reference achieves temperature coefficient of 29.3ppm/°C against temperature variation of −50 – 130°C and line sensitivity of 337ppm/V against supply variation of 0.7–1.8V, while consuming 210nW from 0.7V supply and occupying 0.023mm2.",
"title": ""
},
{
"docid": "6a3afa9644477304d2d32d99c99e07c8",
"text": "This paper presents a comprehensive survey of five most widely used in-vehicle networks from three perspectives: system cost, data transmission capacity, and fault-tolerance capability. The paper reviews the pros and cons of each network, and identifies possible approaches to improve the quality of service (QoS). In addition, two classifications of automotive gateways have been presented along with a brief discussion about constructing a comprehensive in-vehicle communication system with different networks and automotive gateways. Furthermore, security threats to in-vehicle networks are briefly discussed, along with the corresponding protective methods. The survey concludes with highlighting the trends in future development of in-vehicle network technology and a proposal of a topology of the next generation in-vehicle network.",
"title": ""
},
{
"docid": "bd3e5a403cc42952932a7efbd0d57719",
"text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter",
"title": ""
},
{
"docid": "527c1e2a78e7f171025231a475a828b9",
"text": "Cryptography is the science to transform the information in secure way. Encryption is best alternative to convert the data to be transferred to cipher data which is an unintelligible image or data which cannot be understood by any third person. Images are form of the multimedia data. There are many image encryption schemes already have been proposed, each one of them has its own potency and limitation. This paper presents a new algorithm for the image encryption/decryption scheme which has been proposed using chaotic neural network. Chaotic system produces the same results if the given inputs are same, it is unpredictable in the sense that it cannot be predicted in what way the system's behavior will change for any little change in the input to the system. The objective is to investigate the use of ANNs in the field of chaotic Cryptography. The weights of neural network are achieved based on chaotic sequence. The chaotic sequence generated and forwarded to ANN and weighs of ANN are updated which influence the generation of the key in the encryption algorithm. The algorithm has been implemented in the software tool MATLAB and results have been studied. To compare the relative performance peak signal to noise ratio (PSNR) and mean square error (MSE) are used.",
"title": ""
},
{
"docid": "83dec7aa3435effc3040dfb08cb5754a",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
{
"docid": "f8878dd6e858f2acba35bf0f75168815",
"text": "BACKGROUND\nPsoriasis can be found at several different localizations which may be of various impact on patients' quality of life (QoL). One of the easy visible, and difficult to conceal localizations are the nails.\n\n\nOBJECTIVE\nTo achieve more insight into the QoL of psoriatic patients with nail psoriasis, and to characterize the patients with nail involvement which are more prone to the impact of the nail alterations caused by psoriasis.\n\n\nMETHOD\nA self-administered questionnaire was distributed to all members (n = 5400) of the Dutch Psoriasis Association. The Dermatology Life Quality Index (DLQI) and the Nail Psoriasis Quality of life 10 (NPQ10) score were included as QoL measures. Severity of cutaneous lesions was determined using the self-administered psoriasis area and severity index (SAPASI).\n\n\nRESULTS\nPatients with nail psoriasis scored significantly higher mean scores on the DLQI (4.9 vs. 3.7, P = <0.001) and showed more severe psoriasis (SAPASI, 6.6 vs. 5.3, P = <0.001). Patients with coexistence of nail bed and nail matrix features showed higher DLQI scores compared with patients with involvement of one of the two localizations exclusively (5.3 vs. 4.2 vs. 4.3, P = 0.003). Patients with only nail bed alterations scored significant higher NPQ10 scores when compared with patients with only nail matrix features. Patients with psoriatic arthritis (PsA) and nail psoriasis experiences more impairments compared with nail psoriasis patients without PsA (DLQI 5.5 vs. 4.3, NPQ10 13.3 vs. 7.0). Females scored higher mean scores on all QoL scores.\n\n\nCONCLUSION\nGreater attention should be paid to the possible impact nail abnormalities have on patients with nail psoriasis, which can be identified by nail psoriasis specific questionnaires such as the NPQ10. As improving the severity of disease may have a positive influence on QoL, the outcome of QoL measurements should be taken into account when deciding on treatment strategies.",
"title": ""
},
{
"docid": "0d62a781e48d6becc93bcac11692a3c2",
"text": "A Fresnel lens with electrically-tunable diffraction efficiency while possessing high image quality is demonstrated using a phase-separated composite film (PSCOF). The light scattering-free PSCOF is obtained by anisotropic phase separation between liquid crystal and polymer. Such a lens can be operated below 12 volts and its switching time is reasonably fast (~10 ms). The maximum diffraction efficiency reaches ~35% for a linearly polarized light, which is close to the theoretical limit of 41%.",
"title": ""
},
{
"docid": "d62e79e84e17c6e5b4e397e58077fd75",
"text": "We develop a decentralized Bayesian model of college admissions with two ranked colleges, heterogeneous students and two realistic match frictions: students find it costly to apply to college, and college evaluations of their applications are uncertain. Students thus face a portfolio choice problem in their application decision, while colleges choose admissions standards that act like market-clearing prices. Enrollment at each college is affected by the standards at the other college through student portfolio reallocation. In equilibrium, student-college sorting may fail: weaker students sometimes apply more aggressively, and the weaker college might impose higher standards. Applying our framework, we analyze affirmative action, showing how it induces minority applicants to construct their application portfolios as if they were majority students of higher caliber. ∗Earlier versions were called “The College Admissions Problem with Uncertainty” and “A Supply and Demand Model of the College Admissions Problem”. We would like to thank Philipp Kircher (CoEditor) and three anonymous referees for their helpful comments and suggestions. Greg Lewis and Lones Smith are grateful for the financial support of the National Science Foundation. We have benefited from seminars at BU, UCLA, Georgetown, HBS, the 2006 Two-Sided Matching Conference (Bonn), 2006 SED (Vancouver), 2006 Latin American Econometric Society Meetings (Mexico City), and 2007 American Econometric Society Meetings (New Orleans), Iowa State, Harvard/MIT, the 2009 Atlanta NBER Conference, and Concordia. Parag Pathak and Philipp Kircher provided useful discussions of our paper. We are also grateful to John Bound and Brad Hershbein for providing us with student college applications data. †Arizona State University, Department of Economics, Tempe, AZ 85287. ‡Harvard University, Department of Economics, Cambridge, MA 02138. §University of Wisconsin, Department of Economics, Madison, WI 53706.",
"title": ""
},
{
"docid": "459f368625415f80c88da01b69e94258",
"text": "Data visualization and feature selection methods are proposed based on the )oint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.",
"title": ""
},
{
"docid": "5ab4db508bddd2481a867eecd41e6b9a",
"text": "For centuries, music has been shared and remembered by two traditions: aural transmission and in the form of written documents normally called musical scores. Many of these scores exist in the form of unpublished manuscripts and hence they are in danger of being lost through the normal ravages of time. To preserve the music some form of typesetting or, ideally, a computer system that can automatically decode the symbolic images and create new scores is required. Programs analogous to optical character recognition systems called optical music recognition (OMR) systems have been under intensive development for many years. However, the results to date are far from ideal. Each of the proposed methods emphasizes different properties and therefore makes it difficult to effectively evaluate its competitive advantages. This article provides an overview of the literature concerning the automatic analysis of images of printed and handwritten musical scores. For self-containment and for the benefit of the reader, an introduction to OMR processing systems precedes the literature overview. The following study presents a reference scheme for any researcher wanting to compare new OMR algorithms against well-known ones.",
"title": ""
},
{
"docid": "de455ce971c40fe49d14415cd8164122",
"text": "Cardiovascular disease remains the most common health problem in developed countries, and residual risk after implementing all current therapies is still high. Permanent changes in lifestyle may be hard to achieve and people may not always be motivated enough to make the recommended modifications. Emerging research has explored the application of natural food-based strategies in disease management. In recent years, much focus has been placed on the beneficial effects of fish consumption. Many of the positive effects of fish consumption on dyslipidemia and heart diseases have been attributed to n-3 polyunsaturated fatty acids (n-3 PUFAs, i.e., EPA and DHA); however, fish is also an excellent source of protein and, recently, fish protein hydrolysates containing bioactive peptides have shown promising activities for the prevention/management of cardiovascular disease and associated health complications. The present review will focus on n-3 PUFAs and bioactive peptides effects on cardiovascular disease risk factors. Moreover, since considerable controversy exists regarding the association between n-3 PUFAs and major cardiovascular endpoints, we have also reviewed the main clinical trials supporting or not this association.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
}
] |
scidocsrr
|
a85c5f75026c981339b0a94ba6a95ccf
|
A Systematic Literature Review of Open Government Data Research: Challenges, Opportunities and Gaps
|
[
{
"docid": "0ccc233ea8225de88882883d678793c8",
"text": "Sustaining of Moore's Law over the next decade will require not only continued scaling of the physical dimensions of transistors but also performance improvement and aggressive reduction in power consumption. Heterojunction Tunnel FET (TFET) has emerged as promising transistor candidate for supply voltage scaling down to sub-0.5V due to the possibility of sub-kT/q switching without compromising on-current (ION). Recently, n-type III-V HTFET with reasonable on-current and sub-kT/q switching at supply voltage of 0.5V have been experimentally demonstrated. However, steep switching performance of III-V HTFET till date has been limited to range of drain current (IDS) spanning over less than a decade. In this work, we will present progress on complimentary Tunnel FETs and analyze primary roadblocks in the path towards achieving steep switching performance in III-V HTFET.",
"title": ""
},
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "299c0b60f9803c4eb60cc900b196a689",
"text": "The exponentially growing production of data and the social trend towards openness and sharing are powerful forces that are changing the global economy and society. Governments around the world have become active participants in this evolution, opening up their data for access and re-use by public and private agents alike. The phenomenon of Open Government Data has spread around the world in the last four years, driven by the widely held belief that use of Open Government Data has the ability to generate both economic and social value. However, a cursory review of the popular press, as well as an investigation of academic research and empirical data, reveals the need to further understand the relationship between Open Government Data and value. In this paper, we focus on how use of Open Government Data can bring about new innovative solutions that can generate social and economic value. We apply a critical realist approach to a case study analysis to uncover the mechanisms that can explain how data is transformed to value. We explore the case of Opower, a pioneer in using and transforming data to induce a behavioral change that has resulted in a considerable reduction in energy use over the last six years.",
"title": ""
},
{
"docid": "053470c0115d17ffbcbeea313f2da702",
"text": "Although a significant number of public organizations have embraced the idea of open data, many are still reluctant to do this. One root cause is that the publicizing of data represents a shift from a closed to an open system of governance, which has a significant impact upon the relationships between public agencies and the users of open data. Yet no systematic research is available which compares the benefits of an open data with the barriers to its adoption. Based on interviews and a workshop, the benefits and adoption barriers for open data have been derived. The findings show that a gap exists between the promised benefits and barriers. They furthermore suggest that a conceptually simplistic view is often adopted with regard to open data, one which automatically correlates the publicizing of data with use and benefits. Five ‘myths’ are formulated promoting the use of open data and placing the expectations within a realistic perspective. Further, the recommendation is given to take a user’s view and to actively govern the relationship between government and its users.",
"title": ""
}
] |
[
{
"docid": "d9870dc31895226f60537b3e8591f9fd",
"text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3d5c4772d5d73343cc518d062e90f3db",
"text": "Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.",
"title": ""
},
{
"docid": "e86247471d4911cb84aa79911547045b",
"text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.",
"title": ""
},
{
"docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2",
"text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).",
"title": ""
},
{
"docid": "f85a8a7e11a19d89f2709cc3c87b98fc",
"text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.",
"title": ""
},
{
"docid": "e7f9e290eb7cc21b4a0785430546a33b",
"text": "In this study, 306 individuals in 3 age groups--adolescents (13-16), youths (18-22), and adults (24 and older)--completed 2 questionnaire measures assessing risk preference and risky decision making, and 1 behavioral task measuring risk taking. Participants in each age group were randomly assigned to complete the measures either alone or with 2 same-aged peers. Analyses indicated that (a) risk taking and risky decision making decreased with age; (b) participants took more risks, focused more on the benefits than the costs of risky behavior, and made riskier decisions when in peer groups than alone; and (c) peer effects on risk taking and risky decision making were stronger among adolescents and youths than adults. These findings support the idea that adolescents are more inclined toward risky behavior and risky decision making than are adults and that peer influence plays an important role in explaining risky behavior during adolescence.",
"title": ""
},
{
"docid": "5481f319296c007412e62129d2ec5943",
"text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.",
"title": ""
},
{
"docid": "4a29051479ac4b3ad7e7cd84540dbdb6",
"text": "A compact, shared-aperture antenna (SAA) configuration consisting of various planar antennas embedded into a single footprint is presented in this article. An L-probefed, suspended-plate, horizontally polarized antenna operating in an 900-MHz band; an aperture-coupled, vertically polarized, microstrip antenna operating at 4.2-GHz; a 2 × 2 microstrip patch array operating at the X band; a low-side-lobe level (SLL), corporate-fed, 8 × 4 microstrip planar array for synthetic aperture radar (SAR) in the X band; and a printed, single-arm, circularly polarized, tilted-beam spiral antenna operating at the C band are integrated into a single aperture for simultaneous operation. This antenna system could find potential application in many airborne and unmanned aircraft vehicle (UAV) technologies. While the design of these antennas is not that critical, their optimal placement in a compact configuration for simultaneous operation with minimal interference poses a significant challenge to the designer. The placement optimization was arrived at based on extensive numerical fullwave optimizations.",
"title": ""
},
{
"docid": "e6f34f5b5cae1b2e8d7387e9154284ed",
"text": "In this paper the fundamental knowledge of a variable reluctance resolver is presented and an analytical model is demonstrated. With the simulation results are calculated and validated by measurements on a sensor test bench. Based on the introduced model, mechanical and electrical failures of any variable reluctance sensor can be analyzed. The model based simulation is compared to the measurement results and future prospects are given.",
"title": ""
},
{
"docid": "41a54cd203b0964a6c3d9c2b3addff46",
"text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty. In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.",
"title": ""
},
{
"docid": "2ff3d496f0174ffc0e3bd21952c8f0ae",
"text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. A more specific observation he offered lies at the center of our own modern understanding of mental operations:",
"title": ""
},
{
"docid": "67beb9dbd03ae20d4e45a928fdb61f47",
"text": "representation of the game. It was programmed in LI SP. Further use of abstraction was also studied by Friedenbach (1980). The combination of s earch, heuristics, and expert systems led to the best programs in the eighties. At the end of the eighties a new type of Go programs emerged. Th ese programs made an intensive use of pattern recognition. This approach was dis cussed in detail by Boon (1990). In the following years, different AI techniques, such as Rei nforcement Learning (Schraudolph, Dayan, and Sejnowski, 1993), Monte Carlo (Br ügmann, 1993), and Neural Networks (Richards, Moriarty, and Miikkulainen, 1998), were tested in Go. However, programs applying these techniques were not able to surpass the level of the best programs. The combination of search, heuristics, expert systems, and pattern r ecognition remained the winning methodology. Brügmann (1993) proposed to use Monte-Carlo evaluations as an lter ative technique for Computer Go. His idea did not got many followers in the 199 0s. In the following decade, Bouzy and Helmstetter (2003) and Bouzy (2006) combined Mont e-Carlo evaluations and search in Indigo. The program won three bronze medals at the O lympiads of 2004, 2005, and 2006. Their pioneering research inspired the developme nt of Monte-Carlo Tree Search (MCTS) (Coulom, 2006; Kocsis and Szepesv ári, 2006; Chaslot et al., 2006a). Since 2007, MCTS programs are dominating the Computer Go field. MCTS will be explained in the next chapter. 2.6 Go Programs MANGO and MOGO In this subsection, we briefly describe the Go programs M ANGO and MOGO that we use for the experiments in the thesis. Their performance in vari ous tournaments is discussed as well.4",
"title": ""
},
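Because the passage above centres on the emergence of Monte-Carlo Tree Search, a bare-bones UCT iteration may be a useful reference point. This is a generic sketch, not MANGO or MOGO code; the `game` interface (`legal_moves`, `play`, `rollout`) is a hypothetical stand-in.

```python
# Generic sketch of one MCTS iteration with UCT selection (not MANGO/MOGO code).
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}        # move -> Node
        self.visits, self.wins = 0, 0.0

def uct_child(node, c=1.4):
    # Pick the child maximising exploitation + exploration.
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts_iteration(root, game):
    node = root
    # 1) Selection: descend while the node is fully expanded.
    while node.children and len(node.children) == len(game.legal_moves(node.state)):
        node = uct_child(node)
    # 2) Expansion: add one untried move, if any.
    untried = [m for m in game.legal_moves(node.state) if m not in node.children]
    if untried:
        move = random.choice(untried)
        node.children[move] = Node(game.play(node.state, move), parent=node)
        node = node.children[move]
    # 3) Simulation: random playout from the new node.
    result = game.rollout(node.state)   # 1.0 for a win, 0.0 for a loss
    # 4) Backpropagation.
    while node is not None:
        node.visits += 1
        node.wins += result
        node = node.parent
```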
{
"docid": "662fef280f2d03ae535bfbcc06f32810",
"text": "This paper describes a voiceless speech recognition technique that utilizes dynamic visual features to represent the facial movements during phonation. The dynamic features extracted from the mouth video are used to classify utterances without using the acoustic data. The audio signals of consonants are more confusing than vowels and the facial movements involved in pronunciation of consonants are more discernible. Thus, this paper focuses on identifying consonants using visual information. This paper adopts a visual speech model that categorizes utterances into sequences of smallest visually distinguishable units known as visemes. The viseme model used is based on the viseme model of Moving Picture Experts Group 4 (MPEG-4) standard. The facial movements are segmented from the video data using motion history images (MHI). MHI is a spatio-temporal template (grayscale image) generated from the video data using accumulative image subtraction technique. The proposed approach combines discrete stationary wavelet transform (SWT) and Zernike moments to extract rotation invariant features from the MHI. A feedforward multilayer perceptron (MLP) neural network is used to classify the features based on the patterns of visible facial movements. The preliminary experimental results indicate that the proposed technique is suitable for recognition of English consonants.",
"title": ""
},
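The motion history image mentioned in the record above is built by accumulative image differencing: pixels that moved recently stay bright while older motion decays. A small numpy sketch follows; the threshold, decay and frame sizes are assumptions, not the paper's settings.

```python
# Sketch of a motion history image (MHI) built by accumulative image differencing.
# Threshold, decay and clip size are illustrative assumptions.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=255, delta=16, threshold=30):
    """Update a grayscale MHI with one new frame (2-D arrays of equal shape)."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    return np.where(motion, tau, np.maximum(mhi - delta, 0))  # refresh or decay

frames = (np.random.rand(10, 48, 64) * 255).astype(np.uint8)  # toy mouth-region clip
mhi = np.zeros(frames.shape[1:], dtype=np.float32)
for prev_frame, frame in zip(frames[:-1], frames[1:]):
    mhi = update_mhi(mhi, prev_frame, frame)
print(mhi.shape, mhi.max())  # a single grayscale spatio-temporal template
```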
{
"docid": "9ea0612f646228a3da41b7f55c23e825",
"text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiversebased adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.",
"title": ""
},
{
"docid": "54477e35cf5cfcfc61e4dc675449a068",
"text": "Nowadays the amount of data that is being generated every day is increasing in a high level for various sectors. In fact, this volume and diversity of data push us to think wisely for a better solution to store, process and analyze it in the right way. Taking into consideration the healthcare industry, there is a great benefit for using the concept of big data, due to the diversity of data that we are dealing with, the extant, and the velocity which lead us to think about providing the best care for the patients. In this paper, we aim to present a new architecture model for health data. The framework supports the storage and the management of unstructured medical data in a distributed environment based on multi-agent paradigm. The integration of the mobile agent model into hadoop ecosystem will give us the opportunity to enable instant communication process between multiple health repositories.",
"title": ""
},
{
"docid": "09c9a0990946fd884df70d4eeab46ecc",
"text": "Studies of technological change constitute a field of growing importance and sophistication. In this paper we contribute to the discussion with a methodological reflection and application of multi-stage patent citation analysis for the mea surement of inventive progress. Investigating specific patterns of patent citation data, we conclude that single-stage citation analysis cannot reveal technological paths or linea ges. Therefore, one should also make use of indirect citations and bibliographical coupling. To measure aspects of cumulative inventive progress, we develop a “shared specialization measu r ” of patent families. We relate this measure to an expert rating of the technological va lue dded in the field of variable valve actuation for internal combustion engines. In sum, the study presents promising evidence for multi-stage patent citation analysis in order to ex plain aspects of technological change. JEL classification: O31",
"title": ""
},
{
"docid": "72a283eda92eb25404536308d8909999",
"text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.",
"title": ""
},
{
"docid": "fecfd19eaf90b735cf00e727fca768b8",
"text": "Real-time detection of irregularities in visual data is very invaluable and useful in many prospective applications including surveillance, patient monitoring systems, etc. With the surge of deep learning methods in the recent years, researchers have tried a wide spectrum of methods for different applications. However, for the case of irregularity or anomaly detection in videos, training an end-to-end model is still an open challenge, since often irregularity is not well-defined and there are not enough irregular samples to use during training. In this paper, inspired by the success of generative adversarial networks (GANs) for training deep models in unsupervised or self-supervised settings, we propose an end-to-end deep network for detection and fine localization of irregularities in videos (and images). Our proposed architecture is composed of two networks, which are trained in competing with each other while collaborating to find the irregularity. One network works as a pixel-level irregularity Inpainter, and the other works as a patch-level Detector. After an adversarial self-supervised training, in which I tries to fool D into accepting its inpainted output as regular (normal), the two networks collaborate to detect and fine-segment the irregularity in any given testing video. Our results on three different datasets show that our method can outperform the state-of-the-art and fine-segment the irregularity. 1",
"title": ""
},
{
"docid": "f1e646a0627a5c61a0f73a41d35ccac7",
"text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.",
"title": ""
}
] |
scidocsrr
|
4c7b94f0e7470fdd5d62b4174ecb3c7c
|
Please Share! Online Word of Mouth and Charitable Crowdfunding
|
[
{
"docid": "befc5dbf4da526963f8aa180e1fda522",
"text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for ‘‘prestige’’ on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities. 1998 Elsevier Science S.A.",
"title": ""
}
] |
[
{
"docid": "976f16e21505277525fa697876b8fe96",
"text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.",
"title": ""
},
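The low-pass-to-band-pass frequency transformation mentioned in the record above is the standard substitution s → (s² + ω₀²)/(B·s). A minimal numeric illustration with SciPy follows; the prototype (a 3rd-order analog Butterworth low-pass) and the centre-frequency/bandwidth values are arbitrary assumptions, not the crystal-filter design of the paper.

```python
# Illustration of the standard LP -> BP transformation s -> (s^2 + w0^2) / (B*s),
# applied to an arbitrary analog Butterworth prototype (not the paper's filter).
import numpy as np
from scipy import signal

# 3rd-order Butterworth low-pass prototype, cutoff normalised to 1 rad/s.
b_lp, a_lp = signal.butter(3, 1.0, btype="low", analog=True)

w0 = 2 * np.pi * 10e6   # assumed band-pass centre frequency: 10 MHz (rad/s)
bw = 2 * np.pi * 100e3  # assumed bandwidth: 100 kHz (rad/s)

# scipy.signal.lp2bp performs the s -> (s^2 + wo^2)/(bw*s) substitution.
b_bp, a_bp = signal.lp2bp(b_lp, a_lp, wo=w0, bw=bw)

w, h = signal.freqs(b_bp, a_bp, worN=np.linspace(w0 - 3 * bw, w0 + 3 * bw, 501))
print("peak response (dB):", 20 * np.log10(np.max(np.abs(h))))
```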
{
"docid": "16f96e68b19fb561d2232ea4e586bb2e",
"text": "In this letter, charge-based capacitance measurement (CBCM) is applied to characterize bias-dependent capacitances in a CMOS transistor. Due to its special advantage of being free from the errors induced by charge injection, the operation of charge-injection-induced-error-free CBCM allows for the extraction of full-range gate capacitance from the accumulation region to the inversion region and the overlap capacitance of MOSFET devices with submicrometer dimensions.",
"title": ""
},
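Charge-based capacitance measurement infers a capacitance from the average charging current, C ≈ I / (f · Vdd). A back-of-the-envelope illustration follows; the numbers are made up for the example, not measurements from the letter.

```python
# Back-of-the-envelope CBCM arithmetic: C = I / (f * Vdd).
# The numbers below are illustrative, not data from the letter.
f = 10e6        # switching frequency: 10 MHz
vdd = 1.2       # supply voltage: 1.2 V
i_test = 24e-6  # measured average current of the test branch: 24 uA
i_ref = 12e-6   # measured average current of the reference branch: 12 uA

# Subtracting the reference branch cancels the parasitics shared by both branches.
c_extracted = (i_test - i_ref) / (f * vdd)
print(f"extracted capacitance: {c_extracted * 1e15:.1f} fF")  # -> 1000.0 fF
```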
{
"docid": "c17522f4b9f3b229dae56b394adb69a1",
"text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.",
"title": ""
},
{
"docid": "ec36f5a41650cc6c3ba17eb6bd928677",
"text": "Deep learning techniques based on Convolutional Neural Networks (CNNs) are extensively used for the classification of hyperspectral images. These techniques present high computational cost. In this paper, a GPU (Graphics Processing Unit) implementation of a spatial-spectral supervised classification scheme based on CNNs and applied to remote sensing datasets is presented. In particular, two deep learning libraries, Caffe and CuDNN, are used and compared. In order to achieve an efficient GPU projection, different techniques and optimizations have been applied. The implemented scheme comprises Principal Component Analysis (PCA) to extract the main features, a patch extraction around each pixel to take the spatial information into account, one convolutional layer for processing the spectral information, and fully connected layers to perform the classification. To improve the initial GPU implementation accuracy, a second convolutional layer has been added. High speedups are obtained together with competitive classification accuracies.",
"title": ""
},
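The preprocessing pipeline described above (PCA over the spectral axis, then a spatial patch around each pixel as CNN input) can be prototyped on the CPU in a few lines. The sketch below uses assumed cube dimensions, component count and patch size, and is independent of the Caffe/CuDNN implementations compared in the paper.

```python
# Sketch of the CPU-side preprocessing for a spatial-spectral CNN classifier:
# PCA over the spectral axis, then a square patch around each labelled pixel.
# Cube size, component count and patch size are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA

H, W, B = 64, 64, 200            # toy hyperspectral cube: 64x64 pixels, 200 bands
cube = np.random.rand(H, W, B)

n_components, patch = 10, 5      # keep 10 principal components, 5x5 patches
flat = cube.reshape(-1, B)
reduced = PCA(n_components=n_components).fit_transform(flat).reshape(H, W, n_components)

def extract_patch(img, row, col, size=patch):
    """Return the size x size neighbourhood centred on (row, col), edge-padded."""
    r = size // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    return padded[row:row + size, col:col + size, :]

sample = extract_patch(reduced, 10, 20)
print(sample.shape)  # (5, 5, 10) -> one CNN input sample
```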
{
"docid": "83da776714bf49c3bbb64976d20e26a2",
"text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.",
"title": ""
},
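As a concrete anchor for the CFR-based estimators surveyed in the record above, the simplest pilot-aided least-squares estimate is just Ĥ[k] = Y[k]/X[k] at the pilot subcarriers, interpolated across the rest. A toy numpy sketch follows, with assumed FFT size, pilot spacing, channel and noise level.

```python
# Toy least-squares (LS) pilot-based channel estimation for one OFDM symbol.
# FFT size, pilot spacing, taps and noise level are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_fft, pilot_step = 64, 8
pilots = np.arange(0, n_fft, pilot_step)

h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
H_true = np.fft.fft(h_time, n_fft)                                    # true CFR

X = rng.choice([1 + 0j, -1 + 0j], size=n_fft)        # BPSK symbols (pilots known)
noise = (rng.normal(size=n_fft) + 1j * rng.normal(size=n_fft)) * 0.05
Y = H_true * X + noise                               # frequency-domain receive signal

H_ls_pilots = Y[pilots] / X[pilots]                  # LS estimate at pilot tones
# Linear interpolation of real and imaginary parts across all subcarriers.
k = np.arange(n_fft)
H_est = np.interp(k, pilots, H_ls_pilots.real) + 1j * np.interp(k, pilots, H_ls_pilots.imag)

mse = np.mean(np.abs(H_est - H_true) ** 2)
print(f"channel estimation MSE: {mse:.4f}")
```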
{
"docid": "3251674643f09b73a24d037dc1076c72",
"text": "Although the link between sagittal plane motion and exercise intensity has been highlighted, no study assessed if different workloads lead to changes in three-dimensional cycling kinematics. This study compared three-dimensional joint and segment kinematics between competitive and recreational road cyclists across different workloads. Twenty-four road male cyclists (12 competitive and 12 recreational) underwent an incremental workload test to determine aerobic peak power output. In a following session, cyclists performed four trials at sub-maximal workloads (65, 75, 85 and 95% of their aerobic peak power output) at 90 rpm of pedalling cadence. Mean hip adduction, thigh rotation, shank rotation, pelvis inclination (latero-lateral and anterior-posterior), spine inclination and rotation were computed at the power section of the crank cycle (12 o'clock to 6 o'clock crank positions) using three-dimensional kinematics. Greater lateral spine inclination (p < .01, 5-16%, effect sizes = 0.09-0.25) and larger spine rotation (p < .01, 16-29%, effect sizes = 0.31-0.70) were observed for recreational cyclists than competitive cyclists across workload trials. No differences in segment and joint angles were observed from changes in workload with significant individual effects on spine inclination (p < .01). No workload effects were found in segment angles but differences, although small, existed when comparing competitive road to recreational cyclists. When conducting assessment of joint and segment motions, workload between 65 and 95% of individual cyclists' peak power output could be used.",
"title": ""
},
{
"docid": "1e80983e98d5d94605315b8ef45af0fd",
"text": "Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present Population Based Training (PBT), a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.",
"title": ""
},
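The exploit/explore cycle that Population Based Training adds on top of ordinary training can be summarised in a few lines. This is a generic, synchronous sketch of the idea under assumed `train_step`/`evaluate` hooks, not the distributed implementation from the paper.

```python
# Generic sketch of a synchronous Population Based Training loop.
# `train_step` and `evaluate` are hypothetical hooks supplied by the user.
import copy
import random

def pbt(population, train_step, evaluate, rounds=20, perturb=(0.8, 1.2)):
    """population: list of dicts {'params': ..., 'hypers': {...}, 'score': float}."""
    for _ in range(rounds):
        for member in population:
            train_step(member)                    # partial training with current hypers
            member["score"] = evaluate(member)
        population.sort(key=lambda m: m["score"], reverse=True)
        cutoff = max(1, len(population) // 5)     # bottom 20% copy the top 20%
        for loser in population[-cutoff:]:
            winner = random.choice(population[:cutoff])
            loser["params"] = copy.deepcopy(winner["params"])   # exploit: copy weights
            loser["hypers"] = {k: v * random.choice(perturb)    # explore: perturb hypers
                               for k, v in winner["hypers"].items()}
    return max(population, key=lambda m: m["score"])
```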
{
"docid": "e77cf8938714824d46cfdbdb1b809f93",
"text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.",
"title": ""
},
{
"docid": "9fa8133dcb3baef047ee887fea1ed5a3",
"text": "In this paper, we present an effective hierarchical shot classification scheme for broadcast soccer video. We first partition a video into replay and non-replay shots with replay logo detection. Then, non-replay shots are further classified into Long, Medium, Close-up or Out-field types with color and texture features based on a decision tree. We tested the method on real broadcast FIFA soccer videos, and the experimental results demonstrate its effectiveness..",
"title": ""
},
{
"docid": "3d3589a002f8195bb20324dd8a8f5d76",
"text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.",
"title": ""
},
{
"docid": "541de3d6af2edacf7396e5ca66c385e2",
"text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA' search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.",
"title": ""
},
{
"docid": "dac8564305055eaf9291e731dbf9a44d",
"text": "Named Entity Recognition and classification (NERC) is an essential and challenging task in (NLP). Kann ada is a highly inflectional and agglutinating language prov iding one of the richest and most challenging sets of linguistic and statistical features resulting in long and complex word forms, which is large in number. It is primarily a suffixi ng Language and inflected word starts with a root and may have several suffix es added to the right. It is also a Freeword order Language. Like other Indian languages, it is a resource poor language. Annotate d corpora, name dictionaries, good morphological an lyzers, Parts of Speech (POS) taggers etc. are not yet available in the req ui d measure and not many works are reported for t his language. The work related to NERC in Kannada is not yet reported. In recent years, automatic named entity recognition an d extraction systems have become one of the popular research areas. Building NERC for Kannada is challenging. It seeks to classi fy words which represent names in text into predefined categories like perso n name, location, organization, date, time etc. Thi s paper deals with some attempts in this direction. This work starts with e xp riments in building Semi-Automated Statistical M achine learning NLP Models based on Noun Taggers. In this paper we have de loped an algorithm based on supervised learnin g techniques that include Hidden Markov Model (HMM). Some sample resu lts are reported.",
"title": ""
},
{
"docid": "055c9fad6d2f246fc1b6cbb1bce26a92",
"text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.",
"title": ""
},
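The second indicator set in the record above includes standard quantities such as momentum, rate of change and RSI, which are straightforward to compute from a closing-price series. A small pandas sketch follows; the window lengths follow common conventions and are assumptions, not the paper's exact setup.

```python
# Sketch: computing a few of the technical indicators named above from closing prices.
# Window lengths follow common conventions and are assumptions, not the paper's setup.
import numpy as np
import pandas as pd

close = pd.Series(100 + np.cumsum(np.random.randn(300)))  # synthetic closing prices

momentum = close - close.shift(10)                 # 10-day momentum
rate_of_change = close.pct_change(10) * 100        # 10-day rate of change (%)

delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()      # average gain over 14 periods
loss = (-delta.clip(upper=0)).rolling(14).mean()   # average loss over 14 periods
rsi = 100 - 100 / (1 + gain / loss)                # Relative Strength Index

features = pd.DataFrame({"momentum": momentum, "roc": rate_of_change, "rsi": rsi}).dropna()
print(features.tail())
```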
{
"docid": "caac45f02e29295d592ee784697c6210",
"text": "The studies included in this PhD thesis examined the interactions of syphilis, which is caused by Treponema pallidum, and HIV. Syphilis reemerged worldwide in the late 1990s and hereafter increasing rates of early syphilis were also reported in Denmark. The proportion of patients with concurrent HIV has been substantial, ranging from one third to almost two thirds of patients diagnosed with syphilis some years. Given that syphilis facilitates transmission and acquisition of HIV the two sexually transmitted diseases are of major public health concern. Further, syphilis has a negative impact on HIV infection, resulting in increasing viral loads and decreasing CD4 cell counts during syphilis infection. Likewise, HIV has an impact on the clinical course of syphilis; patients with concurrent HIV are thought to be at increased risk of neurological complications and treatment failure. Almost ten per cent of Danish men with syphilis acquired HIV infection within five years after they were diagnosed with syphilis during an 11-year study period. Interestingly, the risk of HIV declined during the later part of the period. Moreover, HIV-infected men had a substantial increased risk of re-infection with syphilis compared to HIV-uninfected men. As one third of the HIV-infected patients had viral loads >1,000 copies/ml, our conclusion supported the initiation of cART in more HIV-infected MSM to reduce HIV transmission. During a five-year study period, including the majority of HIV-infected patients from the Copenhagen area, we observed that syphilis was diagnosed in the primary, secondary, early and late latent stage. These patients were treated with either doxycycline or penicillin and the rate of treatment failure was similar in the two groups, indicating that doxycycline can be used as a treatment alternative - at least in an HIV-infected population. During a four-year study period, the T. pallidum strain type distribution was investigated among patients diagnosed by PCR testing of material from genital lesions. In total, 22 strain types were identified. HIV-infected patients were diagnosed with nine different strains types and a difference by HIV status was not observed indicating that HIV-infected patients did not belong to separate sexual networks. In conclusion, concurrent HIV remains common in patients diagnosed with syphilis in Denmark, both in those diagnosed by serological testing and PCR testing. Although the rate of syphilis has stabilized in recent years, a spread to low-risk groups is of concern, especially due to the complex symptomatology of syphilis. However, given the efficient treatment options and the targeted screening of pregnant women and persons at higher risk of syphilis, control of the infection seems within reach. Avoiding new HIV infections is the major challenge and here cART may play a prominent role.",
"title": ""
},
{
"docid": "dc3de555216f10d84890ecb1165774ff",
"text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.",
"title": ""
},
{
"docid": "c699ede2caeb5953decc55d8e42c2741",
"text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.",
"title": ""
},
{
"docid": "6dbe972f08097355b32685c5793f853a",
"text": "BACKGROUND/AIMS\nRheumatoid arthritis (RA) is a serious health problem resulting in significant morbidity and disability. Tai Chi may be beneficial to patients with RA as a result of effects on muscle strength and 'mind-body' interactions. To obtain preliminary data on the effects of Tai Chi on RA, we conducted a pilot randomized controlled trial. Twenty patients with functional class I or II RA were randomly assigned to Tai Chi or attention control in twice-weekly sessions for 12 weeks. The American College of Rheumatology (ACR) 20 response criterion, functional capacity, health-related quality of life and the depression index were assessed.\n\n\nRESULTS\nAt 12 weeks, 5/10 patients (50%) randomized to Tai Chi achieved an ACR 20% response compared with 0/10 (0%) in the control (p = 0.03). Tai Chi had greater improvement in the disability index (p = 0.01), vitality subscale of the Medical Outcome Study Short Form 36 (p = 0.01) and the depression index (p = 0.003). Similar trends to improvement were also observed for disease activity, functional capacity and health-related quality of life. No adverse events were observed and no patients withdrew from the study.\n\n\nCONCLUSION\nTai Chi appears safe and may be beneficial for functional class I or II RA. These promising results warrant further investigation into the potential complementary role of Tai Chi for treatment of RA.",
"title": ""
},
{
"docid": "38a8471eb20b08499136ef459eb866c2",
"text": "Some recent studies suggest that in progressive multiple sclerosis, neurodegeneration may occur independently from inflammation. The aim of our study was to analyse the interdependence of inflammation, neurodegeneration and disease progression in various multiple sclerosis stages in relation to lesional activity and clinical course, with a particular focus on progressive multiple sclerosis. The study is based on detailed quantification of different inflammatory cells in relation to axonal injury in 67 multiple sclerosis autopsies from different disease stages and 28 controls without neurological disease or brain lesions. We found that pronounced inflammation in the brain is not only present in acute and relapsing multiple sclerosis but also in the secondary and primary progressive disease. T- and B-cell infiltrates correlated with the activity of demyelinating lesions, while plasma cell infiltrates were most pronounced in patients with secondary progressive multiple sclerosis (SPMS) and primary progressive multiple sclerosis (PPMS) and even persisted, when T- and B-cell infiltrates declined to levels seen in age matched controls. A highly significant association between inflammation and axonal injury was seen in the global multiple sclerosis population as well as in progressive multiple sclerosis alone. In older patients (median 76 years) with long-disease duration (median 372 months), inflammatory infiltrates declined to levels similar to those found in age-matched controls and the extent of axonal injury, too, was comparable with that in age-matched controls. Ongoing neurodegeneration in these patients, which exceeded the extent found in normal controls, could be attributed to confounding pathologies such as Alzheimer's or vascular disease. Our study suggests a close association between inflammation and neurodegeneration in all lesions and disease stages of multiple sclerosis. It further indicates that the disease processes of multiple sclerosis may die out in aged patients with long-standing disease.",
"title": ""
},
{
"docid": "e75620184f4baca454af714daf5e7801",
"text": "Although fingerprint experts have presented evidence in criminal courts for more than a century, there have been few scientific investigations of the human capacity to discriminate these patterns. A recent latent print matching experiment shows that qualified, court-practicing fingerprint experts are exceedingly accurate (and more conservative) compared with novices, but they do make errors. Here, a rationale for the design of this experiment is provided. We argue that fidelity, generalizability, and control must be balanced to answer important research questions; that the proficiency and competence of fingerprint examiners are best determined when experiments include highly similar print pairs, in a signal detection paradigm, where the ground truth is known; and that inferring from this experiment the statement \"The error rate of fingerprint identification is 0.68%\" would be unjustified. In closing, the ramifications of these findings for the future psychological study of forensic expertise and the implications for expert testimony and public policy are considered.",
"title": ""
}
] |
scidocsrr
|
942371f9a23a5bae9dd577d4a892384f
|
From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script
|
[
{
"docid": "d5a4c2d61e7d65f1972ed934f399847e",
"text": "We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"title": ""
}
] |
[
{
"docid": "c0e1be5859be1fc5871993193a709f2d",
"text": "This paper reviews the possible causes and effects for no-fault-found observations and intermittent failures in electronic products and summarizes them into cause and effect diagrams. Several types of intermittent hardware failures of electronic assemblies are investigated, and their characteristics and mechanisms are explored. One solder joint intermittent failure case study is presented. The paper then discusses when no-fault-found observations should be considered as failures. Guidelines for assessment of intermittent failures are then provided in the discussion and conclusions. Ó 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "aed8a983fc25d2c1c71401b338d8f5f3",
"text": "Heart disease is the leading cause of death in the world over the past 10 years. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. Decision Tree is one of the successful data mining techniques used. However, most research has applied J4.8 Decision Tree, based on Gain Ratio and binary discretization. Gini Index and Information Gain are two other successful types of Decision Trees that are less used in the diagnosis of heart disease. Also other discretization techniques, voting method, and reduced error pruning are known to produce more accurate Decision Trees. This research investigates applying a range of techniques to different types of Decision Trees seeking better performance in heart disease diagnosis. A widely used benchmark data set is used in this research. To evaluate the performance of the alternative Decision Trees the sensitivity, specificity, and accuracy are calculated. The research proposes a model that outperforms J4.8 Decision Tree and Bagging algorithm in the diagnosis of heart disease patients.",
"title": ""
},
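Since the comparison in the record above hinges on the split criteria themselves, it helps to write them out: Gini = 1 − Σ pᵢ² and entropy = −Σ pᵢ log₂ pᵢ, with information gain being the entropy reduction of a split. A minimal generic sketch follows; it is not tied to the heart-disease dataset used in the paper.

```python
# Minimal definitions of the two split criteria compared above.
import numpy as np

def class_probs(labels):
    _, counts = np.unique(labels, return_counts=True)
    return counts / counts.sum()

def gini(labels):
    p = class_probs(labels)
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    p = class_probs(labels)
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    """Entropy reduction achieved by splitting `parent` into the `children` subsets."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

y = np.array([1, 1, 1, 0, 0, 1, 0, 1])
left, right = y[:4], y[4:]
print(f"gini(parent) = {gini(y):.3f}, info gain of split = {information_gain(y, [left, right]):.3f}")
```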
{
"docid": "f5eb797695e17d59ed9359456a8acfc8",
"text": "The availability of inexpensive CMOS technologies that perform well at microwave frequencies has created new opportunities for automated material handling within supply chain management (SCM) that in hindsight, be viewed as revolutionary. This article outlines the system architecture and circuit design considerations that influence the development of radio frequency identification (RFID) tags through a case study involving a high-performance implementation that achieves a throughput of nearly 800 tags/s at a range greater than 10 m. The impact of a novel circuit design approach ideally suited to the power and die area challenges is also discussed. Insights gleaned from first-generation efforts are reviewed as an object lesson in how to make RFID technology for SCM, at a cost measured in pennies per tag, reach its full potential through a generation 2 standard.",
"title": ""
},
{
"docid": "726728a9ada1d4823ce5420d57b80201",
"text": "OBJECTIVE\nTo investigate the association of muscle function and subgroups of low back pain (no low back pain, pelvic girdle pain, lumbar pain and combined pelvic girdle pain and lumbar pain) in relation to pregnancy.\n\n\nDESIGN\nProspective cohort study.\n\n\nSUBJECTS\nConsecutively enrolled pregnant women seen in gestational weeks 12-18 (n = 301) and 3 months postpartum (n = 262).\n\n\nMETHODS\nClassification into subgroups by means of mechanical assessment of the lumbar spine, pelvic pain provocation tests, standard history and a pain drawing. Trunk muscle endurance, hip muscle strength (dynamometer) and gait speed were investigated.\n\n\nRESULTS\nIn pregnancy 116 women had no low back pain, 33% (n = 99) had pelvic girdle pain, 11% (n = 32) had lumbar pain and 18% (n = 54) had combined pelvic girdle pain and lumbar pain. The prevalence of pelvic girdle pain/combined pelvic girdle pain and lumbar pain decreased postpartum, whereas the prevalence of lumbar pain remained stable. Women with pelvic girdle pain and/or combined pelvic girdle pain and lumbar pain had lower values for trunk muscle endurance, hip extension and gait speed as compared to women without low back pain in pregnancy and postpartum (p < 0.001-0.04). Women with pelvic girdle pain throughout the study had lower values of back flexor endurance compared with women without low back pain.\n\n\nCONCLUSION\nMuscle dysfunction was associated with pelvic girdle pain, which should be taken into consideration when developing treatment strategies and preventive measures.",
"title": ""
},
{
"docid": "3c41bdaeaaa40481c8e68ad00426214d",
"text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.",
"title": ""
},
{
"docid": "b3012ab055e3f4352b3473700c30c085",
"text": "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45% improvement accordingly in mean average precision (mAP).",
"title": ""
},
{
"docid": "4608c8ca2cf58ca9388c25bb590a71df",
"text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.",
"title": ""
},
{
"docid": "eb22a8448b82f6915850fe4d60440b3b",
"text": "In story-based games or other interactive systems, a drama manager (DM) is an omniscient agent that acts to bring about a particular sequence of plot points for the player to experience. Traditionally, the DM's narrative evaluation criteria are solely derived from a human designer. We present a DM that learns a model of the player's storytelling preferences and automatically recommends a narrative experience that is predicted to optimize the player's experience while conforming to the human designer's storytelling intentions. Our DM is also capable of manipulating the space of narrative trajectories such that the player is more likely to make choices that result in the recommended experience. Our DM uses a novel algorithm, called prefix-based collaborative filtering (PBCF), that solves the sequential recommendation problem to find a sequence of plot points that maximizes the player's rating of his or her experience. We evaluate our DM in an interactive storytelling environment based on choose-your-own-adventure novels. Our experiments show that our algorithms can improve the player's experience over the designer's storytelling intentions alone and can deliver more personalized experiences than other interactive narrative systems while preserving players' agency.",
"title": ""
},
{
"docid": "3da6fadaf2363545dfd0cea87fe2b5da",
"text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030",
"title": ""
},
{
"docid": "ddecb743bc098a3e31ca58bc17810cf1",
"text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.",
"title": ""
},
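The pooling rule described in the record above (softmax over the piece activations, sample one piece during training, probability-weight the pieces at test time) can be written compactly. The PyTorch-style sketch below uses assumed tensor shapes and is an illustration of the idea rather than the authors' implementation.

```python
# Sketch of stochastic pooling over maxout pieces (shape assumptions are illustrative):
# `pieces` has shape (batch, units, k) with k linear pieces per maxout unit.
import torch
import torch.nn.functional as F

def stochastic_maxout(pieces, training=True):
    probs = F.softmax(pieces, dim=-1)             # distribution over the k pieces
    if training:
        # Sample one active piece per unit according to the softmax distribution.
        idx = torch.multinomial(probs.reshape(-1, pieces.shape[-1]), 1)
        idx = idx.reshape(pieces.shape[0], pieces.shape[1], 1)
        return pieces.gather(-1, idx).squeeze(-1)
    # At test time, use the probability-weighted average of the pieces.
    return (probs * pieces).sum(dim=-1)

pieces = torch.randn(4, 8, 3)                     # batch of 4, 8 units, k = 3 pieces
print(stochastic_maxout(pieces, training=True).shape)   # torch.Size([4, 8])
print(stochastic_maxout(pieces, training=False).shape)  # torch.Size([4, 8])
```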
{
"docid": "7c0748301936c39166b9f91ba72d92ef",
"text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 
4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). 
handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). 4.10 Verification of class Files THE CLASS FILE FORMAT 208 Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). • Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ",
"title": ""
},
{
"docid": "1f121c30e686d25f44363f44dc71b495",
"text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑",
"title": ""
},
{
"docid": "d197eacce97d161e4292ba541f8bed57",
"text": "A Luenberger-based observer is proposed to the state estimation of a class of nonlinear systems subject to parameter uncertainty and bounded disturbance signals. A nonlinear observer gain is designed in order to minimize the effects of the uncertainty, error estimation and exogenous signals in an 7-L, sense by means of a set of state- and parameterdependent linear matrix inequalities that are solved using standard software packages. A numerical example illustrates the approach.",
"title": ""
},
{
"docid": "e3299737a0fb3cd3c9433f462565b278",
"text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.",
"title": ""
},
{
"docid": "00413dc27271c927b8fd67bde63f48eb",
"text": "The SEAGULL project aims at the development of intelligent systems to support maritime situation awareness based on unmanned aerial vehicles. It proposes to create an intelligent maritime surveillance system by equipping unmanned aerial vehicles (UAVs) with different types of optical sensors. Optical sensors such as cameras (visible, infrared, multi and hyper spectral) can contribute significantly to the generation of situational awareness of maritime events such as (i) detection and georeferencing of oil spills or hazardous and noxious substances; (ii) tracking systems (e.g. vessels, shipwrecked, lifeboat, debris, etc.); (iii) recognizing behavioral patterns (e.g. vessels rendezvous, high-speed vessels, atypical patterns of navigation, etc.); and (iv) monitoring parameters and indicators of good environmental status. On-board transponders will be used for collision detection and avoidance mechanism (sense and avoid). This paper describes the core of the research and development work done during the first 2 years of the project with particular emphasis on the following topics: system architecture, automatic detection of sea vessels by vision sensors and custom designed computer vision algorithms; and a sense and avoid system developed in the theoretical framework of zero-sum differential games.",
"title": ""
},
{
"docid": "a697f85ad09699ddb38994bd69b11103",
"text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.",
"title": ""
},
{
"docid": "e79db51ac85ceafba66dddd5c038fbdf",
"text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.",
"title": ""
},
{
"docid": "936128e89e1c0edec5c0489fa41ba4a2",
"text": "Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We introduce a novel class of probabilistic models, comprising an undirected discrete component and a directed hierarchical continuous component, that can be trained efficiently using the variational autoencoder framework. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, OMNIGLOT, and Caltech-101 Silhouettes datasets.",
"title": ""
},
{
"docid": "c6954957e6629a32f9845df15c60be85",
"text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.",
"title": ""
}
] |
scidocsrr
|
c7424b68be7680bf6e1aef9ec49a024a
|
Adjustable Real-time Style Transfer
|
[
{
"docid": "b5c8ea776debc32ea2663090eb6f37df",
"text": "Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.",
"title": ""
},
{
"docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba",
"text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.",
"title": ""
},
{
"docid": "8a55bf5b614d750a7de6ac34dc321b10",
"text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.",
"title": ""
},
{
"docid": "344be59c5bb605dec77e4d7bd105d899",
"text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.",
"title": ""
}
] |
[
{
"docid": "963f97c27adbc7d1136e713247e9a852",
"text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.",
"title": ""
},
{
"docid": "409baee7edaec587727624192eab93aa",
"text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.",
"title": ""
},
{
"docid": "2742db8262616f2b69d92e0066e6930c",
"text": "Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB competition yet receives little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.",
"title": ""
},
{
"docid": "a91add591aacaa333e109d77576ba463",
"text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.",
"title": ""
},
{
"docid": "00e60176eca7d86261c614196849a946",
"text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. The total thickness of the antenna is only 0.031A,o, which is low-profile when compared with its counterparts.",
"title": ""
},
{
"docid": "b5dc56272d4dea04b756a8614d6762c9",
"text": "Platforms have been considered as a paradigm for managing new product development and innovation. Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.",
"title": ""
},
{
"docid": "35d7da09017c0a6a40bf90bd2e7ea5fc",
"text": "Cloud computing promises a radical shift in the provisioning of computing resource within enterprise. This paper: i) describes the challenges that decision-makers face when attempting to determine the feasibility of the adoption of cloud computing in their organisations; ii) illustrates a lack of existing work to address the feasibility challenges of cloud adoption in enterprise; iii) introduces the Cloud Adoption Toolkit that provides a framework to support decision-makers in identifying their concerns, and matching these concerns to appropriate tools/techniques that can be used to address them. The paper adopts a position paper methodology such that case study evidence is provided, where available, to support claims. We conclude that the Cloud Adoption Toolkit, whilst still under development, shows signs that it is a useful tool for decision-makers as it helps address the feasibility challenges of cloud adoption in enterprise.",
"title": ""
},
{
"docid": "8fb1386af94abb9cacda76861680effd",
"text": "This paper focuses on the development of a front- and rear-wheel-independent drive-type electric vehicle (EV) (FRID EV) as a next-generation EV. The ideal characteristics of a FRID EV promote good performance and safety and are the result of structural features that independently control the driving and braking torques of the front and rear wheels. The first characteristic is the failsafe function. This function enables vehicles to continue running without any unexpected or sudden stops, even if one of the propulsion systems fails. The second characteristic is a function that performs efficient acceleration and deceleration on all road surfaces. This function works by distributing the driving or braking torques to the front and rear wheels, taking into consideration load movement. The third characteristic ensures that the vehicle runs safely on roads with a low friction coefficient (μ), such as icy roads. In this paper, we propose a driving torque distribution method when cornering and a braking torque distribution method; these methods are related to the third characteristic, and they are particularly effective when driving on roads with ultralow μ. We verify the effectiveness of the proposed torque control methods through simulations and experiments on the ultralow-μ road surface with a μ of 0.1.",
"title": ""
},
{
"docid": "5ebd4fc7ee26a8f831f7fea2f657ccdd",
"text": "1 This article was reviewed and accepted by all the senior editors, including the editor-in-chief. Articles published in future issues will be accepted by just a single senior editor, based on reviews by members of the Editorial Board. 2 Sincere thanks go to Anna Dekker and Denyse O’Leary for their assistance with this research. Funding was generously provided by the Advanced Practices Council of the Society for Information Management and by the Social Sciences and Humanities Research Council of Canada. An earlier version of this manuscript was presented at the Academy of Management Conference in Toronto, Canada, in August 2000. 3 In this article, the terms information systems (IS) and information technology (IT) are used interchangeably. 4 Regardless of whether IS services are provided internally (in a centralized, decentralized, or federal manner) or are outsourced, we assume the boundaries of the IS function can be identified. Thus, the fit between the unit(s) providing IS services and the rest of the organization can be examined. and books have been written on the subject, firms continue to demonstrate limited alignment.",
"title": ""
},
{
"docid": "06bba1f9f57b7b452af47321ac8fa358",
"text": "Little is known about the genetic changes that distinguish domestic cat populations from their wild progenitors. Here we describe a high-quality domestic cat reference genome assembly and comparative inferences made with other cat breeds, wildcats, and other mammals. Based upon these comparisons, we identified positively selected genes enriched for genes involved in lipid metabolism that underpin adaptations to a hypercarnivorous diet. We also found positive selection signals within genes underlying sensory processes, especially those affecting vision and hearing in the carnivore lineage. We observed an evolutionary tradeoff between functional olfactory and vomeronasal receptor gene repertoires in the cat and dog genomes, with an expansion of the feline chemosensory system for detecting pheromones at the expense of odorant detection. Genomic regions harboring signatures of natural selection that distinguish domestic cats from their wild congeners are enriched in neural crest-related genes associated with behavior and reward in mouse models, as predicted by the domestication syndrome hypothesis. Our description of a previously unidentified allele for the gloving pigmentation pattern found in the Birman breed supports the hypothesis that cat breeds experienced strong selection on specific mutations drawn from random bred populations. Collectively, these findings provide insight into how the process of domestication altered the ancestral wildcat genome and build a resource for future disease mapping and phylogenomic studies across all members of the Felidae.",
"title": ""
},
{
"docid": "cc4e8c21e58a8b26bf901b597d0971d8",
"text": "Pedestrian detection and semantic segmentation are high potential tasks for many real-time applications. However most of the top performing approaches provide state of art results at high computational costs. In this work we propose a fast solution for achieving state of art results for both pedestrian detection and semantic segmentation. As baseline for pedestrian detection we use sliding windows over cost efficient multiresolution filtered LUV+HOG channels. We use the same channels for classifying pixels into eight semantic classes. Using short range and long range multiresolution channel features we achieve more robust segmentation results compared to traditional codebook based approaches at much lower computational costs. The resulting segmentations are used as additional semantic channels in order to achieve a more powerful pedestrian detector. To also achieve fast pedestrian detection we employ a multiscale detection scheme based on a single flexible pedestrian model and a single image scale. The proposed solution provides competitive results on both pedestrian detection and semantic segmentation benchmarks at 8 FPS on CPU and at 15 FPS on GPU, being the fastest top performing approach.",
"title": ""
},
{
"docid": "e462c0cfc1af657cb012850de1b7b717",
"text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.",
"title": ""
},
{
"docid": "2fb92e88ecbf2937b3b08a9f8de34618",
"text": "The area of image captioning i.e. the automatic generation of short textual descriptions of images has experienced much progress recently. However, image captioning approaches often only focus on describing the content of the image without any emotional or sentimental dimension which is common in human captions. This paper presents an approach for image captioning designed specifically to incorporate emotions and feelings into the caption generation process. The presented approach consists of a Deep Convolutional Neural Network (CNN) for detecting Adjective Noun Pairs in the image and a novel graphical network architecture called \"Concept And Syntax Transition (CAST)\" network for generating sentences from these detected concepts.",
"title": ""
},
{
"docid": "9ca12c5f314d077093753dc0f3ff9cd5",
"text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.",
"title": ""
},
{
"docid": "d91e11127e0d665b859420a534288516",
"text": "In most cases, the story of popular RPG games is designed by professional designers as a main content. However, manual design of game content has limitation in the quantitative aspect. Manual story generation requires a large amount of time and effort. Because gamers want more diverse and rich content, so it is not easy to satisfy the needs with manual design. PCG (Procedural Content Generation) is to automatically generate the content of the game. In this paper, we propose a quest generation engine using Petri net planning. As a combination of Petri-net modules a quest, a quest plot is created. The proposed method is applied to a commercial game platform to show the feasibility.",
"title": ""
},
{
"docid": "27329c67322a5ed2c4f2a7dd6ceb79a8",
"text": "In the world’s largest-ever deployment of online voting, the iVote Internet voting system was trusted for the return of 280,000 ballots in the 2015 state election in New South Wales, Australia. During the election, we performed an independent security analysis of parts of the live iVote system and uncovered severe vulnerabilities that could be leveraged to manipulate votes, violate ballot privacy, and subvert the verification mechanism. These vulnerabilities do not seem to have been detected by the election authorities before we disclosed them, despite a preelection security review and despite the system having run in a live state election for five days. One vulnerability, the result of including analytics software from an insecure external server, exposed some votes to complete compromise of privacy and integrity. At least one parliamentary seat was decided by a margin much smaller than the number of votes taken while the system was vulnerable. We also found fundamental protocol flaws, including vote verification that was itself susceptible to manipulation. This incident underscores the difficulty of conducting secure elections online and carries lessons for voters, election officials, and the e-voting research community.",
"title": ""
},
{
"docid": "c175910d1809ad6dc073f79e4ca15c0c",
"text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.",
"title": ""
},
{
"docid": "1c177a7fdbd15e04a6b122a284a9014a",
"text": "Malicious software installed on infected computers is a fundamental component of online crime. Malware development thus plays an essential role in the underground economy of cyber-crime. Malware authors regularly update their software to defeat defenses or to support new or improved criminal business models. A large body of research has focused on detecting malware, defending against it and identifying its functionality. In addition to these goals, however, the analysis of malware can provide a glimpse into the software development industry that develops malicious code.\n In this work, we present techniques to observe the evolution of a malware family over time. First, we develop techniques to compare versions of malicious code and quantify their differences. Furthermore, we use behavior observed from dynamic analysis to assign semantics to binary code and to identify functional components within a malware binary. By combining these techniques, we are able to monitor the evolution of a malware's functional components. We implement these techniques in a system we call Beagle, and apply it to the observation of 16 malware strains over several months. The results of these experiments provide insight into the effort involved in updating malware code, and show that Beagle can identify changes to individual malware components.",
"title": ""
},
{
"docid": "04a4996eb5be0d321037cac5cb3c1ad6",
"text": "Repeated retrieval enhances long-term retention, and spaced repetition also enhances retention. A question with practical and theoretical significance is whether there are particular schedules of spaced retrieval (e.g., gradually expanding the interval between tests) that produce the best learning. In the present experiment, subjects studied and were tested on items until they could recall each one. They then practiced recalling the items on 3 repeated tests that were distributed according to one of several spacing schedules. Increasing the absolute (total) spacing of repeated tests produced large effects on long-term retention: Repeated retrieval with long intervals between each test produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. However, there was no evidence that a particular relative spacing schedule (expanding, equal, or contracting) was inherently superior to another. Although expanding schedules afforded a pattern of increasing retrieval difficulty across repeated tests, this did not translate into gains in long-term retention. Repeated spaced retrieval had powerful effects on retention, but the relative schedule of repeated tests had no discernible impact.",
"title": ""
}
] |
scidocsrr
|
b9a2345fa6d0740625baf845a07488d4
|
Diagonal principal component analysis for face recognition
|
[
{
"docid": "94b84ed0bb69b6c4fc7a268176146eea",
"text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.",
"title": ""
}
] |
[
{
"docid": "41eab64d00f1a4aaea5c5899074d91ca",
"text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.",
"title": ""
},
{
"docid": "f9fd7fc57dfdfbfa6f21dc074c9e9daf",
"text": "Recently, Lin and Tsai proposed an image secret sharing scheme with steganography and authentication to prevent participants from the incidental or intentional provision of a false stego-image (an image containing the hidden secret image). However, dishonest participants can easily manipulate the stego-image for successful authentication but cannot recover the secret image, i.e., compromise the steganography. In this paper, we present a scheme to improve authentication ability that prevents dishonest participants from cheating. The proposed scheme also defines the arrangement of embedded bits to improve the quality of stego-image. Furthermore, by means of the Galois Field GF(2), we improve the scheme to a lossless version without additional pixels. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
},
{
"docid": "9813df16b1852cf6d843ff3e1c67fa88",
"text": "Traumatic neuromas are tumors resulting from hyperplasia of axons and nerve sheath cells after section or injury to the nervous tissue. We present a case of this tumor, confirmed by anatomopathological examination, in a male patient with history of circumcision. Knowledge of this entity is very important in achieving the differential diagnosis with other lesions that affect the genital area such as condyloma acuminata, bowenoid papulosis, lichen nitidus, sebaceous gland hyperplasia, achrochordon and pearly penile papules.",
"title": ""
},
{
"docid": "534609ce9b008555cf433ba20b02fb4a",
"text": "VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws from the experience gained in the early to mid 1990’s on flaw selection strategies for POCL planning, and combines this with more recent developments in the field of domain independent planning such as distance based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than previously possible by POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.",
"title": ""
},
{
"docid": "8c8a100e4dc69e1e68c2bd55f010656d",
"text": "In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a signi7cant improvement with respect to a previous work. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb2df8e27a3c284028d0fbb86652ae14",
"text": "The large bulk of packets/flows in future core networks will require a highly efficient header processing in the switching elements. Simplifying lookup in core network switching elements is capital to transport data at high rates and with low latency. Flexible network hardware combined with agile network control is also an essential property for future software-defined networking. We argue that only further decoupling between the control and data planes will unlock the flexibility and agility in SDN for the design of new network solutions for core networks. This article proposes a new approach named KeyFlow to build a flexible network-fabricbased model. It replaces the table lookup in the forwarding engine by elementary operations relying on a residue number system. This provides us tools to design a stateless core network by still using OpenFlow centralized control. A proof of concept prototype is validated using the Mininet emulation environment and OpenFlow 1.0. The results indicate RTT reduction above 50 percent, especially for networks with densely populated flow tables. KeyFlow achieves above 30 percent reduction in keeping active flow state in the network.",
"title": ""
},
{
"docid": "00a48b2c053c5d634a3480c1543cb3d2",
"text": "Interruptions and distractions due to smartphone use in healthcare settings pose potential risks to patient safety. Therefore, it is important to assess smartphone use at work, to encourage nursing students to review their relevant behaviors, and to recognize these potential risks. This study's aim was to develop a scale to measure smartphone addiction and test its validity and reliability. We investigated nursing students' experiences of distractions caused by smartphones in the clinical setting and their opinions about smartphone use policies. Smartphone addiction and the need for a scale to measure it were identified through a literature review and in-depth interviews with nursing students. This scale showed reliability and validity with exploratory and confirmatory factor analysis. In testing the discriminant and convergent validity of the selected (18) items with four factors, the smartphone addiction model explained approximately 91% (goodness-of-fit index = 0.909) of the variance in the data. Pearson correlation coefficients among addiction level, distractions in the clinical setting, and attitude toward policies on smartphone use were calculated. Addiction level and attitude toward policies of smartphone use were negatively correlated. This study suggests that healthcare organizations in Korea should create practical guidelines and policies for the appropriate use of smartphones in clinical practice.",
"title": ""
},
{
"docid": "42b0c0c340cfb49e1eb7c07e8f251f94",
"text": "The fisheries sector in the course of the last three decades have been transformed from a developed country to a developing country dominance. Aquaculture, the farming of waters, though a millennia old tradition during this period has become a significant contributor to food fish production, currently accounting for nearly 50 % of global food fish consumption; in effect transforming our dependence from a hunted to a farmed supply as for all our staple food types. Aquaculture and indeed the fisheries sector as a whole is predominated in the developing countries, and accordingly the development strategies adopted by the sector are influenced by this. Aquaculture also being a newly emerged food production sector has being subjected to an increased level of public scrutiny, and one of the most contentious aspects has been its impacts on biodiversity. In this synthesis an attempt is made to assess the impacts of aquaculture on biodiversity. Instances of major impacts on biodiversity conservation arising from aquaculture, such as land use, effluent discharge, effects on wild populations, alien species among others are highlighted and critically examined. The influence of paradigm changes in development strategies and modern day market forces have begun to impact on aquaculture developments. Consequently, improvements in practices and adoption of more environmentally friendly approaches that have a decreasing negative influence on biodiversity conservation are highlighted. An attempt is also made to demonstrate direct and or indirect benefits of aquaculture, such as through being a substitute to meet human needs for food, particularly over-exploited and vulnerable fish stocks, and for other purposes (e.g. medicinal ingredients), on biodiversity conservation, often a neglected entity.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "fccbcdff722a297e5a389674d7557a18",
"text": "For the last few decades more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems and can be composed of one or several items. Some comparison or comparative studies were also conducted to identify the best one in different situations. All these issues should be considered while choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items being evaluated in each questionnaire to indicate those that can identify users’ perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9421-11 and ISO/WD 9241-112) and classical quality ergonomic criteria.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "b8875516c3ccf633eb174c94112f436d",
"text": "In an attempt to mimic everyday activities that are performed in 3-dimensional environments, exercise programs have been designed to integrate training of the trunk muscles with training of the extremities. Many believe that the most effective way to recruit the core stabilizing muscles is to execute traditional exercise movements on unstable surfaces. However, physical activity is rarely performed with a stable load on an unstable surface; usually, the surface is stable, and the external resistance is not. The purpose of this study was to evaluate muscle activity of the prime movers and core stabilizers while lifting stable and unstable loads on stable and unstable surfaces during the seated overhead shoulder press exercise. Thirty resistance-trained subjects performed the shoulder press exercise for 3 sets of 3 repetitions under 2 load (barbell and dumbbell) and 2 surface (exercise bench and Swiss ball) conditions at a 10 repetition maximum relative intensity. Surface electromyography (EMG) measured muscle activity for 8 muscles (anterior deltoid, middle deltoid, trapezius, triceps brachii, rectus abdominis, external obliques, and upper and lower erector spinae). The average root mean square of the EMG signal was calculated for each condition. The results showed that as the instability of the exercise condition increased, the external load decreased. Triceps activation increased with external resistance, where the barbell/bench condition had the greatest EMG activation and the dumbbell/Swiss ball condition had the least. The upper erector spinae had greater muscle activation when performing the barbell presses on the Swiss ball vs. the bench. The findings provide little support for training with a lighter load using unstable loads or unstable surfaces.",
"title": ""
},
{
"docid": "eba9ec47b04e08ff2606efa9ffebb6f8",
"text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.",
"title": ""
},
{
"docid": "62a0b14c86df32d889d43eb484eadcda",
"text": "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.",
"title": ""
},
{
"docid": "2eafdf2c8f1324090cee1a141a2488e7",
"text": "Understanding recurrent networks through rule extraction has a long history. This has taken on new interests due to the need for interpreting or verifying neural networks. One basic form for representing stateful rules is deterministic finite automata (DFA). Previous research shows that extracting DFAs from trained second-order recurrent networks is not only possible but also relatively stable. Recently, several new types of recurrent networks with more complicated architectures have been introduced. These handle challenging learning tasks usually involving sequential data. However, it remains an open problem whether DFAs can be adequately extracted from these models. Specifically, it is not clear how DFA extraction will be affected when applied to different recurrent networks trained on data sets with different levels of complexity. Here, we investigate DFA extraction on several widely adopted recurrent networks that are trained to learn a set of seven regular Tomita grammars. We first formally analyze the complexity of Tomita grammars and categorize these grammars according to that complexity. Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars. Our experiments show that for most recurrent networks, their extraction performance decreases as the complexity of the underlying grammar increases. On grammars of lower complexity, most recurrent networks obtain desirable extraction performance. As for grammars with the highest level of complexity, while several complicated models fail with only certain recurrent networks having satisfactory extraction performance.",
"title": ""
},
{
"docid": "48dbd48a531867486b2d018442f64ebb",
"text": "The purpose of this paper is to analyze the extent to which the use of social media can support customer knowledge management (CKM) in organizations relying on a traditional bricks-and-mortar business model. The paper uses a combination of qualitative case study and netnography on Starbucks, an international coffee house chain. Data retrieved from varied sources such as newspapers, newswires, magazines, scholarly publications, books, and social media services were textually analyzed. Three major findings could be culled from the paper. First, Starbucks deploys a wide range of social media tools for CKM that serve as effective branding and marketing instruments for the organization. Second, Starbucks redefines the roles of its customers through the use of social media by transforming them from passive recipients of beverages to active contributors of innovation. Third, Starbucks uses effective strategies to alleviate customers’ reluctance for voluntary knowledge sharing, thereby promoting engagement in social media. The scope of the paper is limited by the window of the data collection period. Hence, the findings should be interpreted in the light of this constraint. The lessons gleaned from the case study suggest that social media is not a tool exclusive to online businesses. It can be a potential game-changer in supporting CKM efforts even for traditional businesses. This paper represents one of the earliest works that analyzes the use of social media for CKM in an organization that relies on a traditional bricks-and-mortar business model.",
"title": ""
},
{
"docid": "cb9ba3aaafccae2cd7ea5e32479d2099",
"text": "Partial least squares-based structural equation modeling (PLS-SEM) is extensively used in the field of information systems, as well as in many other fields where multivariate statistical methods are employed. One of the most fundamental issues in PLS-SEM is that of minimum sample size estimation. The “10-times rule” has been a favorite due to its simplicity of application, even though it tends to yield imprecise estimates. We propose two related methods, based on mathematical equations, as alternatives for minimum sample size estimation in PLSSEM: the inverse square root method, and the gamma-exponential method. Based on three Monte Carlo experiments, we demonstrate that both methods are fairly accurate. The inverse square root method is particularly attractive in terms of its simplicity of application.",
"title": ""
},
{
"docid": "9327ab4f9eba9a32211ddb39463271b1",
"text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.",
"title": ""
},
{
"docid": "94fbd5c6f1347bb04ab8d9f6e768f8df",
"text": "(3) because ‖(xa,va)‖2 ≤ L and ηt only has a finite variance. For the first term on the right-hand side in Eq (2), if the regularization parameter λ1 is sufficiently large, the Hessian matrix of the loss function specified in the paper is positive definite at the optimizer based on the property of alternating least square (Uschmajew 2012). The estimation of Θ and va is thus locally q-linearly convergent to the optimizer. This indicates that for every 1 > 0, we have, ‖v̂a,t+1 − v a‖2 ≤ (q1 + 1)‖v̂a,t − v a‖2 (4) where 0 < q1 < 1. As a conclusion, we have for any δ > 0, with probability at least 1− δ,",
"title": ""
}
] |
scidocsrr
|
4f7def054e9928937bb4e2a827dc1821
|
Rendering Subdivision Surfaces using Hardware Tessellation
|
[
{
"docid": "5d9ed198f35312988a4b823c79ebb3a4",
"text": "A quadtree algorithm is developed to triangulate deformed, intersecting parametric surfaces. The biggest problem with adaptive sampling is to guarantee that the triangulation is accurate within a given tolerance. A new method guarantees the accuracy of the triangulation, given a \"Lipschitz\" condition on the surface definition. The method constructs a hierarchical set of bounding volumes for the surface, useful for ray tracing and solid modeling operations. The task of adaptively sampling a surface is broken into two parts: a subdivision mechanism for recursively subdividing a surface, and a set of subdivision criteria for controlling the subdivision process.An adaptive sampling technique is said to be robust if it accurately represents the surface being sampled. A new type of quadtree, called a restricted quadtree, is more robust than the traditional unrestricted quadtree at adaptive sampling of parametric surfaces. Each sub-region in the quadtree is half the width of the previous region. The restricted quadtree requires that adjacent regions be the same width within a factor of two, while the traditional quadtree makes no restriction on neighbor width. Restricted surface quadtrees are effective at recursively sampling a parametric surface. Quadtree samples are concentrated in regions of high curvature, and along intersection boundaries, using several subdivision criteria. Silhouette subdivision improves the accuracy of the silhouette boundary when a viewing transformation is available at sampling time. The adaptive sampling method is more robust than uniform sampling, and can be more efficient at rendering deformed, intersecting parametric surfaces.",
"title": ""
},
{
"docid": "9c2e89bad3ca7b7416042f95bf4f4396",
"text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.",
"title": ""
}
] |
[
{
"docid": "90fa2211106f4a8e23c5a9c782f1790e",
"text": "Page layout is dominant in many genres of physical documents, but it is frequently overlooked when texts are digitised. Its presence is largely determined by available technologies and skills: If no provision is made for creating, preserving, or describing layout, then it tends not to be created, preserved or described. However, I argue, the significance and utility of layout for readers is such that it will survive or re-emerge. I review how layout has been treated in the literature of graphic design and linguistics, and consider its role as a memory tool. I distinguish between fixed, flowed, fugitive and fragmented pages, determined not only by authorial intent but also by technical constraints. Finally, I describe graphic literacy as a component of functional literacy and suggest that corresponding graphic literacies are needed not only by readers, but by creators of documents and by the information management technologies that produce, deliver, and store them.",
"title": ""
},
{
"docid": "327a681898f6f39ae98321643e06fba1",
"text": "Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data. We show how to use AT for the tasks of entity recognition and relation extraction. In particular, we demonstrate that applying AT to a general purpose baseline model for jointly extracting entities and relations, allows improving the stateof-the-art effectiveness on several datasets in different contexts (i.e., news, biomedical, and real estate data) and for different languages (English and Dutch).",
"title": ""
},
{
"docid": "297d95a81658b3d50bf3aff5bcbf7047",
"text": "In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimise the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB-A and IJB-B face recognition benchmarks, exceeding the previous state-of-the-art by a large margin. The dataset and models are publicly available.",
"title": ""
},
{
"docid": "2b34bd00f114ddd7758bf4878edcab45",
"text": "This paper considers an UWB balun optimized for a frequency band from 6 to 8.5 GHz. The balun provides a transition from unbalanced coplanar waveguide (CPW) to balanced coplanar stripline (CPS), which is suitable for feeding broadband coplanar antennas such as Vivaldi or bow-tie antennas. It is shown, that applying a solid ground plane under the CPS-to-CPS transition enables decreasing its area by a factor of 4.7. Such compact balun can be used for feeding uniplanar antennas, while significantly saving substrate area. Several transition configurations have been fabricated for single and double-layer configurations. They have been verified by comparison with results both from a full-wave electromagnetic (EM) simulation and experimental measurements.",
"title": ""
},
{
"docid": "17ebf9f15291a3810d57771a8c669227",
"text": "We describe preliminary work toward applying a goal reasoning agent for controlling an underwater vehicle in a partially observable, dynamic environment. In preparation for upcoming at-sea tests, our investigation focuses on a notional scenario wherein a autonomous underwater vehicle pursuing a survey goal unexpectedly detects the presence of a potentially hostile surface vessel. Simulations suggest that Goal Driven Autonomy can successfully reason about this scenario using only the limited computational resources typically available on underwater robotic platforms.",
"title": ""
},
{
"docid": "e377063b8fe2d8a12b7c894e11a530e3",
"text": "This paper aims at learning to score the figure skating sports videos. To address this task, we propose a deep architecture that includes two complementary components, i.e., Self-Attentive LSTM and Multi-scale Convolutional Skip LSTM. These two components can efficiently learn the local and global sequential information in each video. Furthermore, we present a large-scale figure skating sports video dataset – FisV dataset. This dataset includes 500 figure skating videos with the average length of 2 minutes and 50 seconds. Each video is annotated by two scores of nine different referees, i.e., Total Element Score(TES) and Total Program Component Score (PCS). Our proposed model is validated on FisV and MIT-skate datasets. The experimental results show the effectiveness of our models in learning to score the figure skating videos.",
"title": ""
},
{
"docid": "550070e6bc24986fbc30c58e2171c227",
"text": "Detection of anomalous trajectories is an important problem in the surveillance domain. Various algorithms based on learning of normal trajectory patterns have been proposed for this problem. Yet, these algorithms typically suffer from one or more limitations: They are not designed for sequential analysis of incomplete trajectories or online learning based on an incrementally updated training set. Moreover, they typically involve tuning of many parameters, including ad-hoc anomaly thresholds, and may therefore suffer from overfitting and poorly-calibrated alarm rates. In this article, we propose and investigate the Sequential Hausdorff Nearest-Neighbour Conformal Anomaly Detector (SHNN-CAD) for online learning and sequential anomaly detection in trajectories. This is a parameter-light algorithm that offers a well-founded approach to the calibration of the anomaly threshold. The discords algorithm, originally proposed by Keogh et al, is another parameter-light anomaly detection algorithm that has previously been shown to have good classification performance on a wide range of time-series datasets, including trajectory data. We implement and investigate the performance of SHNN-CAD and the discords algorithm on four different labelled trajectory datasets. The results show that SHNN-CAD achieves competitive classification performance with minimum parameter tuning during unsupervised online learning and sequential anomaly detection in trajectories.",
"title": ""
},
{
"docid": "8085ffe018b09505464547242b2e3c21",
"text": "Reducible flow graphs occur naturally in connection with flowcharts of computer programs and are used extensively for code optimization and global data flow analysis. In this paper we present an O(n2 log(n2/m)) algorithm for finding a maximum cycle packing in any weighted reducible flow graph with n vertices and m arcs; our algorithm heavily relies on Ramachandran's earlier work concerning reducible flow graphs.",
"title": ""
},
{
"docid": "9593712906aa8272716a7fe5b482b91d",
"text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.",
"title": ""
},
{
"docid": "4805f0548cb458b7fad623c07ab7176d",
"text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is global asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.",
"title": ""
},
{
"docid": "3eb0ed6db613c94af266279bc38c1c28",
"text": "We can better understand deep neural networks by identifying which features each of their neurons have learned to detect. To do so, researchers have created Deep Visualization techniques including activation maximization, which synthetically generates inputs (e.g. images) that maximally activate each neuron. A limitation of current techniques is that they assume each neuron detects only one type of feature, but we know that neurons can be multifaceted, in that they fire in response to many different types of features: for example, a grocery store class neuron must activate either for rows of produce or for a storefront. Previous activation maximization techniques constructed images without regard for the multiple different facets of a neuron, creating inappropriate mixes of colors, parts of objects, scales, orientations, etc. Here we introduce an algorithm that explicitly uncovers the multiple facets of each neuron by producing a synthetic visualization of each of the types of images that activate a neuron. We also introduce regularization methods that produce state-of-the-art results in terms of the interpretability of images obtained by activation maximization. By separately synthesizing each type of image a neuron fires in response to, the visualizations have more appropriate colors and coherent global structure. Multifaceted feature visualization thus provides a clearer and more comprehensive description of the role of each neuron. Proceedings of the 33 rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s). Figure 1. Top: Visualizations of 8 types of images (feature facets) that activate the same “grocery store” class neuron. Bottom: Example training set images that activate the same neuron, and resemble the corresponding synthetic image in the top panel.",
"title": ""
},
{
"docid": "23a329c63f9a778e3ec38c25fa59748a",
"text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for näıve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.",
"title": ""
},
{
"docid": "77d2255e0a2d77ea8b2682937b73cc7d",
"text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-",
"title": ""
},
{
"docid": "4e9005d6f8e1ddcd8d160c66cc61ab41",
"text": "Architectural tactics are decisions to efficiently solve quality attributes in software architecture. Security is a complex quality property due to its strong dependence on the application domain. However, the selection of security tactics in the definition of software architecture is guided informally and depends on the experience of the architect. This study presents a methodological approach to address and specify the quality attribute of security in architecture design applying security tactics. The approach is illustrated with a case study about a Tsunami Early Warning System.",
"title": ""
},
{
"docid": "1f613fc1a2e7b29473cf0d3aa53cbb80",
"text": "The visualization and analysis of dynamic social networks are challenging problems, demanding the simultaneous consideration of relational and temporal aspects. In order to follow the evolution of a network over time, we need to detect not only which nodes and which links change and when these changes occur, but also the impact they have on their neighbourhood and on the overall relational structure. Aiming to enhance the perception of structural changes at both the micro and the macro level, we introduce the change centrality metric. This novel metric, as well as a set of further metrics we derive from it, enable the pair wise comparison of subsequent states of an evolving network in a discrete-time domain. Demonstrating their exploitation to enrich visualizations, we show how these change metrics support the visual analysis of network dynamics.",
"title": ""
},
{
"docid": "e0f88ddc85cfe4cdcbe761b85d2781d8",
"text": "Intermodal Transportation Systems (ITS) are logistics networks integrating different transportation services, designed to move goods from origin to destination in a timely manner and using intermodal transportation means. This paper addresses the problem of the modeling and management of ITS at the operational level considering the impact that the new Information and Communication Technologies (ICT) tools can have on management and control of these systems. An effective ITS model at the operational level should focus on evaluating performance indices describing activities, resources and concurrency, by integrating information and financial flows. To this aim, ITS are regarded as discrete event systems and are modeled in a Petri net framework. We consider as a case study the ferry terminal of Trieste (Italy) that is described and simulated in different operative conditions characterized by different types of ICT solutions and information. The simulation results show that ICT have a huge potential for efficient real time management and operation of ITS, as well as an effective impact on the infrastructures.",
"title": ""
},
{
"docid": "63b283d40abcccd17b4771535ac000e4",
"text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.",
"title": ""
},
{
"docid": "83926511ab8ce222f02e96820c8feb68",
"text": "The grounding system design for GIS indoor substation is proposed in this paper. The design concept of equipotential ground grids in substation building as well as connection of GIS enclosures to main ground grid is described. The main ground grid design is performed according to IEEE Std. 80-2000. The real case study of grounding system design for 120 MVA, 69-24 kV distribution substation in MEA's power system is demonstrated.",
"title": ""
},
{
"docid": "d18faf207a0dbccc030e5dcc202949ab",
"text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.",
"title": ""
},
{
"docid": "b2cb59b7464c3d7ead4fe3d70410a49c",
"text": "X-ray measurements of the hip joints of children, with special reference to the acetabular index, suggest that the upper standard deviation of normal comprises the borderline to a critical zone where extreme values of normal and pathologic hips were found together. Above the double standard deviation only severe dysplasias were present. Investigations of the shaft-neck angle and the degree of anteversion including the wide standard deviation demonstrate that it is very difficult to determine where these angles become pathologic. It is more important to look for the relationship between femoral head and acetabulum. A new measurement--the Hip Value is based on measurements of the Idelberg- Frank angle, the Wiberg angle and MZ-distance of decentralization. By statistical methods, normal and pathological joints can be separated as follows: in adult Hip Values, between 6 and 15 indicate a normal joint form; values between 16 and 21 indicate a slight deformation and values of 22 and above are indications of a severe deformation, in children in the normal range the Hip Value reaches 14; values of 15 and up are pathological.",
"title": ""
}
] |
scidocsrr
|
1defe92f13d92c65f2dce69e045109d4
|
Classification-Driven Watershed Segmentation
|
[
{
"docid": "5f31e3405af91cd013c3193c7d3cdd8d",
"text": "In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.",
"title": ""
},
{
"docid": "6206968905f6e211b07e896f49ecdc57",
"text": "We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, discuss briefly its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.",
"title": ""
}
] |
[
{
"docid": "dc62e382c60237ae71ebeab6d9be93ea",
"text": "Deep reinforcement learning for multi-agent cooperation and competition has been a hot topic recently. This paper focuses on cooperative multi-agent problem based on actor-critic methods under local observations settings. Multi agent deep deterministic policy gradient obtained state of art results for some multi-agent games, whereas, it cannot scale well with growing amount of agents. In order to boost scalability, we propose a parameter sharing deterministic policy gradient method with three variants based on neural networks, including actor-critic sharing, actor sharing and actor sharing with partially shared critic. Benchmarks from rllab show that the proposed method has advantages in learning speed and memory efficiency, well scales with growing amount of agents, and moreover, it can make full use of reward sharing and exchangeability if possible.",
"title": ""
},
{
"docid": "03371f6200ebf2bdf0807e41a998550c",
"text": "As next-generation sequencing projects generate massive genome-wide sequence variation data, bioinformatics tools are being developed to provide computational predictions on the functional effects of sequence variations and narrow down the search of casual variants for disease phenotypes. Different classes of sequence variations at the nucleotide level are involved in human diseases, including substitutions, insertions, deletions, frameshifts, and non-sense mutations. Frameshifts and non-sense mutations are likely to cause a negative effect on protein function. Existing prediction tools primarily focus on studying the deleterious effects of single amino acid substitutions through examining amino acid conservation at the position of interest among related sequences, an approach that is not directly applicable to insertions or deletions. Here, we introduce a versatile alignment-based score as a new metric to predict the damaging effects of variations not limited to single amino acid substitutions but also in-frame insertions, deletions, and multiple amino acid substitutions. This alignment-based score measures the change in sequence similarity of a query sequence to a protein sequence homolog before and after the introduction of an amino acid variation to the query sequence. Our results showed that the scoring scheme performs well in separating disease-associated variants (n = 21,662) from common polymorphisms (n = 37,022) for UniProt human protein variations, and also in separating deleterious variants (n = 15,179) from neutral variants (n = 17,891) for UniProt non-human protein variations. In our approach, the area under the receiver operating characteristic curve (AUC) for the human and non-human protein variation datasets is ∼0.85. We also observed that the alignment-based score correlates with the deleteriousness of a sequence variation. In summary, we have developed a new algorithm, PROVEAN (Protein Variation Effect Analyzer), which provides a generalized approach to predict the functional effects of protein sequence variations including single or multiple amino acid substitutions, and in-frame insertions and deletions. The PROVEAN tool is available online at http://provean.jcvi.org.",
"title": ""
},
{
"docid": "9d700ef057eb090336d761ebe7f6acb0",
"text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods",
"title": ""
},
{
"docid": "b4edd546c786bbc7a72af67439dfcad7",
"text": "We aim to develop a computationally feasible, cognitivelyinspired, formal model of concept invention, drawing on Fauconnier and Turner’s theory of conceptual blending, and grounding it on a sound mathematical theory of concepts. Conceptual blending, although successfully applied to describing combinational creativity in a varied number of fields, has barely been used at all for implementing creative computational systems, mainly due to the lack of sufficiently precise mathematical characterisations thereof. The model we will define will be based on Goguen’s proposal of a Unified Concept Theory, and will draw from interdisciplinary research results from cognitive science, artificial intelligence, formal methods and computational creativity. To validate our model, we will implement a proof of concept of an autonomous computational creative system that will be evaluated in two testbed scenarios: mathematical reasoning and melodic harmonisation. We envisage that the results of this project will be significant for gaining a deeper scientific understanding of creativity, for fostering the synergy between understanding and enhancing human creativity, and for developing new technologies for autonomous creative systems.",
"title": ""
},
{
"docid": "b893e0321a51a2b06e1d8f2a59a296b6",
"text": "Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We have performed a PubMED literature search for green tea consumption and the correlation to the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. We additionally included selected mechanistical studies for a possible mode of action. The comparability between studies was limited due to major differences in study outlines; a meta analysis was thus not possible and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophagus and stomach cancer, green tea increased the cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country, for lung cancer Chinese studies found a protective effect, but not studies from outside China. Epidemiological studies thus do not support a cancer protective effect. GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.",
"title": ""
},
{
"docid": "d880349c2760a8cd71d86ea3212ba1f0",
"text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.",
"title": ""
},
{
"docid": "b46801d2903131bcfbc12bdd457ddbe7",
"text": "Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage. Thus, they exert an important role in the field of cybersecurity. However, state-of-the-art IOCs detection systems rely heavily on hand-crafted features with expert knowledge of cybersecurity, and require a large amount of supervised training corpora to train an IOC classifier. In this paper, we propose using a neural-based sequence labelling model to identify IOCs automatically from reports on cybersecurity without expert knowledge of cybersecurity. Our work is the first to apply an end-to-end sequence labelling to the task in IOCs identification. By using an attention mechanism and several token spelling features, we find that the proposed model is capable of identifying the low frequency IOCs from long sentences contained in cybersecurity reports. Experiments show that the proposed model outperforms other sequence labelling models, achieving over 88% average F1-score.",
"title": ""
},
{
"docid": "cf6f0a6d53c3b615f27a20907e6eb93f",
"text": "OBJECTIVE\nWe sought to investigate whether a low-fat vegan diet improves glycemic control and cardiovascular risk factors in individuals with type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nIndividuals with type 2 diabetes (n = 99) were randomly assigned to a low-fat vegan diet (n = 49) or a diet following the American Diabetes Association (ADA) guidelines (n = 50). Participants were evaluated at baseline and 22 weeks.\n\n\nRESULTS\nForty-three percent (21 of 49) of the vegan group and 26% (13 of 50) of the ADA group participants reduced diabetes medications. Including all participants, HbA(1c) (A1C) decreased 0.96 percentage points in the vegan group and 0.56 points in the ADA group (P = 0.089). Excluding those who changed medications, A1C fell 1.23 points in the vegan group compared with 0.38 points in the ADA group (P = 0.01). Body weight decreased 6.5 kg in the vegan group and 3.1 kg in the ADA group (P < 0.001). Body weight change correlated with A1C change (r = 0.51, n = 57, P < 0.0001). Among those who did not change lipid-lowering medications, LDL cholesterol fell 21.2% in the vegan group and 10.7% in the ADA group (P = 0.02). After adjustment for baseline values, urinary albumin reductions were greater in the vegan group (15.9 mg/24 h) than in the ADA group (10.9 mg/24 h) (P = 0.013).\n\n\nCONCLUSIONS\nBoth a low-fat vegan diet and a diet based on ADA guidelines improved glycemic and lipid control in type 2 diabetic patients. These improvements were greater with a low-fat vegan diet.",
"title": ""
},
{
"docid": "02209c1215a39c17b4099603ef700c97",
"text": "The goal of the Automated Evaluation of Scientific Writing (AESW) Shared Task is to analyze the linguistic characteristics of scientific writing to promote the development of automated writing evaluation tools that can assist authors in writing scientific papers. The proposed task is to predict whether a given sentence requires editing to ensure its “fit” with the scientific writing genre. We describe the proposed task, training, development, and test data sets, and evaluation metrics. Quality means doing it right when no one is looking. – Henry Ford",
"title": ""
},
{
"docid": "7c1b3ba1b8e33ed866ae90b3ddf80ce6",
"text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.",
"title": ""
},
{
"docid": "9b123e0cf32118094b803323d1073b99",
"text": "The lack of sufficient labeled Web pages in many languages, especially for those uncommonly used ones, presents a great challenge to traditional supervised classification methods to achieve satisfactory Web page classification performance. To address this, we propose a novel Nonnegative Matrix Tri-factorization (NMTF) based Dual Knowledge Transfer (DKT) approach for cross-language Web page classification, which is based on the following two important observations. First, we observe that Web pages for a same topic from different languages usually share some common semantic patterns, though in different representation forms. Second, we also observe that the associations between word clusters and Web page classes are a more reliable carrier than raw words to transfer knowledge across languages. With these recognitions, we attempt to transfer knowledge from the auxiliary language, in which abundant labeled Web pages are available, to target languages, in which we want classify Web pages, through two different paths: word cluster approximations and the associations between word clusters and Web page classes. Due to the reinforcement between these two different knowledge transfer paths, our approach can achieve better classification accuracy. We evaluate the proposed approach in extensive experiments using a real world cross-language Web page data set. Promising results demonstrate the effectiveness of our approach that is consistent with our theoretical analyses.",
"title": ""
},
{
"docid": "d6f1278ccb6de695200411137b85b89a",
"text": "The complexity of information systems is increasing in recent years, leading to increased effort for maintenance and configuration. Self-adaptive systems (SASs) address this issue. Due to new computing trends, such as pervasive computing, miniaturization of IT leads to mobile devices with the emerging need for context adaptation. Therefore, it is beneficial that devices are able to adapt context. Hence, we propose to extend the definition of SASs and include context adaptation. This paper presents a taxonomy of self-adaptation and a survey on engineering SASs. Based on the taxonomy and the survey, we motivate a new perspective on SAS including context adaptation.",
"title": ""
},
{
"docid": "c174facf9854db5aae149e82f9f2a239",
"text": "A new feeding technique for printed Log-periodic dipole arrays (LPDAs) is presented, and used to design a printed LPDA operating between 4 and 18 GHz. The antenna has been designed using CST MICROWAVE STUDIO 2010, and the simulation results show that the antenna can be used as an Ultra Wideband Antenna in the range 6-9 GHz.",
"title": ""
},
{
"docid": "e473c5133203e8f1b937ec9dae7cd469",
"text": "The Data Warehouse (DW) design remains a great challenge process for DW designers. As well, so far, there is no strong method to support the requirements analysis process in DW projects. The literature approaches try to solve this tedious and important issue; however, many of these approaches ignore or bypass the requirements elicitation phase. In this paper, we propose a method to generate multidimensional schemas from decisional requirements. We elected natural language (NL) like syntax for expressing decisional/business users' needs. Our approach distinguishes from existing ones in that it: i) is NL-based for requirements elicitation; ii) uses a matrix representation to normalize users' requirements, iii) automates the generation of star schemas relying on eight specific heuristics. We developed SSReq (Star Schemas from Requirements) prototype to demonstrate the feasibility of our approach illustrated with a real case study.",
"title": ""
},
{
"docid": "3a6c58a05427392750d15307fda4faec",
"text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.",
"title": ""
},
{
"docid": "cdb83e9a31172d6687622dc7ac841c91",
"text": "Introduction Various forms of social media are used by many mothers to maintain social ties and manage the stress associated with their parenting roles and responsibilities. ‘Mommy blogging’ as a specific type of social media usage is a common and growing phenomenon, but little is known about mothers’ blogging-related experiences and how these may contribute to their wellbeing. This exploratory study investigated the blogging-related motivations and goals of Australian mothers. Methods An online survey was emailed to members of an Australian online parenting community. The survey included open-ended questions that invited respondents to discuss their motivations and goals for blogging. A thematic analysis using a grounded approach was used to analyze the qualitative data obtained from 235 mothers. Results Five primary motivations for blogging were identified: developing connections with others, experiencing heightened levels of mental stimulation, achieving self-validation, contributing to the welfare of others, and extending skills and abilities. Discussion These motivations are discussed in terms of their various properties and dimensions to illustrate how these mothers appear to use blogging to enhance their psychological wellbeing.",
"title": ""
},
{
"docid": "f77495366909b9713463bebf2b4ff2fc",
"text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.",
"title": ""
},
{
"docid": "9b7ff8a7dec29de5334f3de8d1a70cc3",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "fefa533d5abb4be0afe76d9a7bbd9435",
"text": "Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents).",
"title": ""
}
] |
scidocsrr
|
d7d808b8f227180a5b507e274d286096
|
Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks
|
[
{
"docid": "40b78c5378159e9cdf38275a773b8109",
"text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.",
"title": ""
}
] |
[
{
"docid": "3e23069ba8a3ec3e4af942727c9273e9",
"text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.",
"title": ""
},
{
"docid": "990d811789fd5025d784a147facf9d07",
"text": "1389-1286/$ see front matter 2012 Elsevier B.V http://dx.doi.org/10.1016/j.comnet.2012.06.016 ⇑ Corresponding author. Tel.: +216 96 819 500. E-mail addresses: olfa.gaddour@enis.rnu.tn (O isep.ipp.pt (A. Koubâa). IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) is a routing protocol specifically designed for Low power and Lossy Networks (LLN) compliant with the 6LoWPAN protocol. It currently shows up as an RFC proposed by the IETF ROLL working group. However, RPL has gained a lot of maturity and is attracting increasing interest in the research community. The absence of surveys about RPL motivates us to write this paper, with the objective to provide a quick introduction to RPL. In addition, we present the most relevant research efforts made around RPL routing protocol that pertain to its performance evaluation, implementation, experimentation, deployment and improvement. We also present an experimental performance evaluation of RPL for different network settings to understand the impact of the protocol attributes on the network behavior, namely in terms of convergence time, energy, packet loss and packet delay. Finally, we point out open research challenges on the RPL design. We believe that this survey will pave the way for interested researchers to understand its behavior and contributes for further relevant research works. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "76dd20f0464ff42badc5fd4381eed256",
"text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.",
"title": ""
},
{
"docid": "1b76b9d3f1326e8f6522f3cdd2c276bb",
"text": "Classifier has been widely applied in machine learning, such as pattern recognition, medical diagnosis, credit scoring, banking and weather prediction. Because of the limited local storage at user side, data and classifier has to be outsourced to cloud for storing and computing. However, due to privacy concerns, it is important to preserve the confidentiality of data and classifier in cloud computing because the cloud servers are usually untrusted. In this work, we propose a framework for privacy-preserving outsourced classification in cloud computing (POCC). Using POCC, an evaluator can securely train a classification model over the data encrypted with different public keys, which are outsourced from the multiple data providers. We prove that our scheme is secure in the semi-honest model",
"title": ""
},
{
"docid": "6d2d9de5db5b03a98a26efc8453588d8",
"text": "In this paper we describe a system for use on a mobile robot that detects potential loop closures using both the visual and spatial appearance of the local scene. Loop closing is the act of correctly asserting that a vehicle has returned to a previously visited location. It is an important component in the search to make SLAM (Simultaneous Localization and Mapping) the reliable technology it should be. Paradoxically, it is hardest in the presence of substantial errors in vehicle pose estimates which is exactly when it is needed most. The contribution of this paper is to show how a principled and robust description of local spatial appearance (using laser rangefinder data) can be combined with a purely camera based system to produce superior performance. Individual spatial components (segments) of the local structure are described using a rotationally invariant shape descriptor and salient aspects thereof, and entropy as measure of their innate complexity. Comparisons between scenes are made using relative entropy and by examining the mutual arrangement of groups of segments. We show the inclusion of spatial information allows the resolution of ambiguities stemming from repetitive visual artifacts in urban settings. Importantly the method we present is entirely independent of the navigation and or mapping process and so is entirely unaffected by gross errors in pose estimation.",
"title": ""
},
{
"docid": "4f7c1a965bcde03dedf1702c85b2ce77",
"text": "Strategic managers are consistently faced with the decision of how to allocate scarce corporate resources in an environment that is placing more and more pressures on them. Recent scholarship in strategic management suggests that many of these pressures come directly from sources associated with social issues in management, rather than traditional arenas of strategic management. Using a greatly-improved source of data on corporate social performance, this paper reports the results of a rigorous study of the empirical linkages between financial and social performance. CSP is found to be positively associated with prior financial performance, supporting the theory that slack resource availability and CSP are positively related. CSP is also found to be positively associated with future financial performance, supporting the theory that good management and CSP are positively related. Post-print version of an article published in Strategic Management Journal 18(4): 303-319 (1997 April). doi: 10.1002/(SICI)1097-0266(199704)18:4<303::AID-SMJ869>3.0.CO;2-G",
"title": ""
},
{
"docid": "02621546c67e6457f350d0192b616041",
"text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.",
"title": ""
},
{
"docid": "caaca962473382e40a08f90240cc88b6",
"text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.",
"title": ""
},
{
"docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3",
"text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.",
"title": ""
},
{
"docid": "7f9b7f50432d04968a1fb62855481eda",
"text": "BACKGROUND/PURPOSE\nAccurate prenatal diagnosis of complex anatomic connections and associated anomalies has only been possible recently with the use of ultrasonography, echocardiography, and fetal magnetic resonance imaging (MRI). To assess the impact of improved antenatal diagnosis in the management and outcome of conjoined twins, the authors reviewed their experience with 14 cases.\n\n\nMETHODS\nA retrospective review of prenatally diagnosed conjoined twins referred to our institution from 1996 to present was conducted.\n\n\nRESULTS\nIn 14 sets of conjoined twins, there were 10 thoracoomphalopagus, 2 dicephalus tribrachius dipus, 1 ischiopagus, and 1 ischioomphalopagus. The earliest age at diagnosis was 9 weeks' gestation (range, 9 to 29; mean, 20). Prenatal imaging with ultrasonography, echocardiography, and ultrafast fetal MRI accurately defined the shared anatomy in all cases. Associated anomalies included cardiac malformations (11 of 14), congenital diaphragmatic hernia (4 of 14), abdominal wall defects (2 of 14), and imperforate anus (2 of 14). Three sets of twins underwent therapeutic abortion, 1 set of twins died in utero, and 10 were delivered via cesarean section at a mean gestational age of 34 weeks. There were 5 individual survivors in the series after separation (18%). In one case, in which a twin with a normal heart perfused the cotwin with a rudimentary heart, the ex utero intrapartum treatment procedure (EXIT) was utilized because of concern that the normal twin would suffer immediate cardiac decompensation at birth. This EXIT-to-separation strategy allowed prompt control of the airway and circulation before clamping the umbilical cord and optimized control over a potentially emergent situation, leading to survival of the normal cotwin. In 2 sets of twins in which each twin had a normal heart, tissue expanders were inserted before separation.\n\n\nCONCLUSIONS\nAdvances in prenatal diagnosis allow detailed, accurate evaluations of conjoined twins. Careful prenatal studies may uncover cases in which emergent separation at birth is lifesaving.",
"title": ""
},
{
"docid": "58677916e11e6d5401b7396d117a517b",
"text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.",
"title": ""
},
{
"docid": "b8b4e582fbcc23a5a72cdaee1edade32",
"text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.",
"title": ""
},
{
"docid": "7856e64f16a6b57d8f8743d94ea9f743",
"text": "Unconsciousness is a fundamental component of general anesthesia (GA), but anesthesiologists have no reliable ways to be certain that a patient is unconscious. To develop EEG signatures that track loss and recovery of consciousness under GA, we recorded high-density EEGs in humans during gradual induction of and emergence from unconsciousness with propofol. The subjects executed an auditory task at 4-s intervals consisting of interleaved verbal and click stimuli to identify loss and recovery of consciousness. During induction, subjects lost responsiveness to the less salient clicks before losing responsiveness to the more salient verbal stimuli; during emergence they recovered responsiveness to the verbal stimuli before recovering responsiveness to the clicks. The median frequency and bandwidth of the frontal EEG power tracked the probability of response to the verbal stimuli during the transitions in consciousness. Loss of consciousness was marked simultaneously by an increase in low-frequency EEG power (<1 Hz), the loss of spatially coherent occipital alpha oscillations (8-12 Hz), and the appearance of spatially coherent frontal alpha oscillations. These dynamics reversed with recovery of consciousness. The low-frequency phase modulated alpha amplitude in two distinct patterns. During profound unconsciousness, alpha amplitudes were maximal at low-frequency peaks, whereas during the transition into and out of unconsciousness, alpha amplitudes were maximal at low-frequency nadirs. This latter phase-amplitude relationship predicted recovery of consciousness. Our results provide insights into the mechanisms of propofol-induced unconsciousness, establish EEG signatures of this brain state that track transitions in consciousness precisely, and suggest strategies for monitoring the brain activity of patients receiving GA.",
"title": ""
},
{
"docid": "ac62d57dac1a363275ddf989881d194a",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.08.010 ⇑ Corresponding author. Address: College of De University, 1239 Siping Road, Shanghai 200092, PR 6598 3432. E-mail addresses: huchenliu@foxmaill.com (H.-C (L. Liu), liunan@cqjtu.edu.cn (N. Liu). Failure mode and effects analysis (FMEA) is a risk assessment tool that mitigates potential failures in systems, processes, designs or services and has been used in a wide range of industries. The conventional risk priority number (RPN) method has been criticized to have many deficiencies and various risk priority models have been proposed in the literature to enhance the performance of FMEA. However, there has been no literature review on this topic. In this study, we reviewed 75 FMEA papers published between 1992 and 2012 in the international journals and categorized them according to the approaches used to overcome the limitations of the conventional RPN method. The intention of this review is to address the following three questions: (i) Which shortcomings attract the most attention? (ii) Which approaches are the most popular? (iii) Is there any inadequacy of the approaches? The answers to these questions will give an indication of current trends in research and the best direction for future research in order to further address the known deficiencies associated with the traditional FMEA. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4deb101ba94ef958cfe84610f2abccc4",
"text": "Iris recognition is considered to be the most reliable and accurate biometric identification system available. Iris recognition system captures an image of an individual’s eye, the iris in the image is then meant for the further segmentation and normalization for extracting its feature. The performance of iris recognition systems depends on the process of segmentation. Segmentation is used for the localization of the correct iris region in the particular portion of an eye and it should be done accurately and correctly to remove the eyelids, eyelashes, reflection and pupil noises present in iris region. In our paper we are using Daughman’s Algorithm segmentation method for Iris Recognition. Iris images are selected from the CASIA Database, then the iris and pupil boundary are detected from rest of the eye image, removing the noises. The segmented iris region was normalized to minimize the dimensional inconsistencies between iris regions by using Daugman’s Rubber Sheet Model. Then the features of the iris were encoded by convolving the normalized iris region with 1D Log-Gabor filters and phase quantizing the output in order to produce a bit-wise biometric template. The Hamming distance was chosen as a matching metric, which gave the measure of how many bits disagreed between the templates of the iris. Index Terms —Daughman’s Algorithm, Daugman’s Rubber Sheet Model, Hamming Distance, Iris Recognition, segmentation.",
"title": ""
},
{
"docid": "ca70bf377f8823c2ecb1cdd607c064ec",
"text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.",
"title": ""
},
{
"docid": "af6464d1e51cb59da7affc73977eed71",
"text": "Recommender systems leverage both content and user interactions to generate recommendations that fit users' preferences. The recent surge of interest in deep learning presents new opportunities for exploiting these two sources of information. To recommend items we propose to first learn a user-independent high-dimensional semantic space in which items are positioned according to their substitutability, and then learn a user-specific transformation function to transform this space into a ranking according to the user's past preferences. An advantage of the proposed architecture is that it can be used to effectively recommend items using either content that describes the items or user-item ratings. We show that this approach significantly outperforms state-of-the-art recommender systems on the MovieLens 1M dataset.",
"title": ""
},
{
"docid": "1b3c37f20cc341f50c7d12c425bc94af",
"text": "Vertex is a Wrapper Induction system developed at Yahoo! for extracting structured records from template-based Web pages. To operate at Web scale, Vertex employs a host of novel algorithms for (1) Grouping similar structured pages in a Web site, (2) Picking the appropriate sample pages for wrapper inference, (3) Learning XPath-based extraction rules that are robust to variations in site structure, (4) Detecting site changes by monitoring sample pages, and (5) Optimizing editorial costs by reusing rules, etc. The system is deployed in production and currently extracts more than 250 million records from more than 200 Web sites. To the best of our knowledge, Vertex is the first system to do high-precision information extraction at Web scale.",
"title": ""
},
{
"docid": "66638a2a66f6829f5b9ac72e4ace79ed",
"text": "The Theory of Waste Management is a unified body of knowledge about waste and waste management, and it is founded on the expectation that waste management is to prevent waste to cause harm to human health and the environment and promote resource use optimization. Waste Management Theory is to be constructed under the paradigm of Industrial Ecology as Industrial Ecology is equally adaptable to incorporate waste minimization and/or resource use optimization goals and values.",
"title": ""
}
] |
scidocsrr
|
7869dcc5bcfb069ecbf790ca41cbe38b
|
Hybrid Approach for Emotion Classification of Audio Conversation Based on Text and Speech Mining
|
[
{
"docid": "cfbf63d92dfafe4ac0243acdff6cf562",
"text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective",
"title": ""
}
] |
[
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
{
"docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11",
"text": "e eective of information retrieval (IR) systems have become more important than ever. Deep IR models have gained increasing aention for its ability to automatically learning features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resemble a black box. erefore, it is necessary to identify the dierence between automatically learned features by deep IR models and hand-craed features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the dierences between these deep IR models. is paper aims to conduct a deep investigation on deep IR models. Specically, we conduct an extensive empirical study on two dierent datasets, including Robust and LETOR4.0. We rst compared the automatically learned features and handcraed features on the respects of query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-craed features. erefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two dierent categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that two types of deep IR models focus on dierent categories of words, including topic-related words and query-related words.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "130139c25f42dbf9c779e5fc3db5f721",
"text": "Among many movies that have been released, some generate high profit while the others do not. This paper studies the relationship between movie factors and its revenue and build prediction models. Besides analysis on aggregate data, we also divide data into groups using different methods and compare accuracy across these techniques as well as explore whether clustering techniques could help improve accuracy. Specifically, two major steps were employed. Initially, linear regression, polynomial regression and support vector regression (SVR) were applied on the entire movie data to predict the movie revenue. Then, clustering techniques, such as by genre, using Expectation Maximization (EM) and using K-means were applied to divide data into groups before regression analyses are executed. To compare accuracy among different techniques, R-square and the root-mean-square error (RMSE) were used as a performance indicator. Our study shows that generally linear regression without clustering offers the model with the highest R-square, while linear regression with EM clustering yields the lowest RMSE.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "9b658cf50907e117fdc071ff5d60f8ba",
"text": "Ontology-based data access (OBDA) is a new paradigm aiming at accessing and managing data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. In the last years, this new paradigm has been used for providing users with abstract (independent from technological and system-oriented aspects), effective, and reasoning-intensive mechanisms for querying the data residing at the information system sources. In this paper we argue that OBDA, besides querying data, provides the right principles for devising a formal approach to data quality. In particular, we concentrate on one of the most important dimensions considered both in the literature and in the practice of data quality, namely consistency. We define a general framework for data consistency in OBDA, and present algorithms and complexity analysis for several relevant tasks related to the problem of checking data quality under this dimension, both at the extensional level (content of the data sources), and at the intensional level (schema of the",
"title": ""
},
{
"docid": "41a0f95ef912cb6adf072ee33064589d",
"text": "This paper proposes an active capacitive sensing circuit for fingerprint sensors, which includes a pixel level charge-sharing and charge pump to replace an ADC. This paper also proposes the operating algorithm for 16-level gray scale image. The active capacitive technology is more flexible and can be adjusted to adapt to a wide range of different skin types and environments. The proposed novel circuit is composed with unit gain buffer, 6-stage charge pump and analog comparator. The proper operation is validated by the HSPICE simulation of one pixel with condition of 0.35μm typical CMOS parameter and 3.3V power.",
"title": ""
},
{
"docid": "5a4a6328fc88fbe32a81c904135b05c9",
"text": "Semi-supervised learning plays a significant role in multi-class classification, where a small number of labeled data are more deterministic while substantial unlabeled data might cause large uncertainties and potential threats. In this paper, we distinguish the label fitting of labeled and unlabeled training data through a probabilistic vector with an adaptive parameter, which always ensures the significant importance of labeled data and characterizes the contribution of unlabeled instance according to its uncertainty. Instead of using traditional least squares regression (LSR) for classification, we develop a new discriminative LSR by equipping each label with an adjustment vector. This strategy avoids incorrect penalization on samples that are far away from the boundary and simultaneously facilitates multi-class classification by enlarging the geometrical distance of instances belonging to different classes. An efficient alternative algorithm is exploited to solve the proposed model with closed form solution for each updating rule. We also analyze the convergence and complexity of the proposed algorithm theoretically. Experimental results on several benchmark datasets demonstrate the effectiveness and superiority of the proposed model for multi-class classification tasks.",
"title": ""
},
{
"docid": "75a9715ce9eaffaa43df5470ad7cacca",
"text": "Resting frontal electroencephalographic (EEG) asymmetry has been hypothesized as a marker of risk for major depressive disorder (MDD), but the extant literature is based predominately on female samples. Resting frontal asymmetry was assessed on 4 occasions within a 2-week period in 306 individuals aged 18-34 (31% male) with (n = 143) and without (n = 163) lifetime MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (American Psychiatric Association, 1994). Lifetime MDD was linked to relatively less left frontal activity for both sexes using a current source density (CSD) reference, findings that were not accounted for solely by current MDD status or current depression severity, suggesting that CSD-referenced EEG asymmetry is a possible endophenotype for depression. In contrast, results for average and linked mastoid references were less consistent but demonstrated a link between less left frontal activity and current depression severity in women.",
"title": ""
},
{
"docid": "23fe6b01d4f31e69e753ff7c78674f19",
"text": "Advancements in information technology often task users with complex and consequential privacy and security decisions. A growing body of research has investigated individuals’ choices in the presence of privacy and information security tradeoffs, the decision-making hurdles affecting those choices, and ways to mitigate such hurdles. This article provides a multi-disciplinary assessment of the literature pertaining to privacy and security decision making. It focuses on research on assisting individuals’ privacy and security choices with soft paternalistic interventions that nudge users toward more beneficial choices. The article discusses potential benefits of those interventions, highlights their shortcomings, and identifies key ethical, design, and research challenges.",
"title": ""
},
{
"docid": "a56552cb8ab102fb73a5824634e2c027",
"text": "In this paper, a tutorial overview on anomaly detection for hyperspectral electro-optical systems is presented. This tutorial is focused on those techniques that aim to detect small man-made anomalies typically found in defense and surveillance applications. Since a variety of methods have been proposed for detecting such targets, this tutorial places emphasis on the techniques that are either mathematically more tractable or easier to interpret physically. These methods are not only more suitable for a tutorial publication, but also an essential to a study of anomaly detection. Previous surveys on this subject have focused mainly on anomaly detectors developed in a statistical framework and have been based on well-known background statistical models. However, the most recent research trends seem to move away from the statistical framework and to focus more on deterministic and geometric concepts. This work also takes into consideration these latest trends, providing a wide theoretical review without disregarding practical recommendations about algorithm implementation. The main open research topics are addressed as well, the foremost being algorithm optimization, which is required for embodying anomaly detectors in real-time systems.",
"title": ""
},
{
"docid": "6ae33cdc9601c90f9f3c1bda5aa8086f",
"text": "A k-uniform hypergraph is hamiltonian if for some cyclic ordering of its vertex set, every k consecutive vertices form an edge. In 1952 Dirac proved that if the minimum degree in an n-vertex graph is at least n/2 then the graph is hamiltonian. We prove an approximate version of an analogous result for uniform hypergraphs: For every k ≥ 3 and γ > 0, and for all n large enough, a sufficient condition for an n-vertex k-uniform hypergraph to be hamiltonian is that each (k − 1)-element set of vertices is contained in at least (1/2 + γ)n edges. Research supported by NSF grant DMS-0300529. Research supported by KBN grant 2 P03A 015 23 and N201036 32/2546. Part of research performed at Emory University, Atlanta. Research supported by NSF grant DMS-0100784",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
{
"docid": "fe407c5c554096543ab05550599b369a",
"text": "The IMT 2020 requirements of 20 Gb/s peak data rate and 1 ms latency present significant engineering challenges for the design of 5G cellular systems. Systems that make use of the mmWave bands above 10 GHz ---where large regions of spectrum are available --- are a promising 5G candidate that may be able to rise to the occasion. However, although the mmWave bands can support massive peak data rates, delivering these data rates for end-to-end services while maintaining reliability and ultra-low-latency performance to support emerging applications and use cases will require rethinking all layers of the protocol stack. This article surveys some of the challenges and possible solutions for delivering end-to-end, reliable, ultra-low-latency services in mmWave cellular systems in terms of the MAC layer, congestion control, and core network architecture.",
"title": ""
},
{
"docid": "450fdd88aa45a405eace9a5a1e0113f7",
"text": "DNN-based cross-modal retrieval has become a research hotspot, by which users can search results across various modalities like image and text. However, existing methods mainly focus on the pairwise correlation and reconstruction error of labeled data. They ignore the semantically similar and dissimilar constraints between different modalities, and cannot take advantage of unlabeled data. This paper proposes Cross-modal Deep Metric Learning with Multi-task Regularization (CDMLMR), which integrates quadruplet ranking loss and semi-supervised contrastive loss for modeling cross-modal semantic similarity in a unified multi-task learning architecture. The quadruplet ranking loss can model the semantically similar and dissimilar constraints to preserve cross-modal relative similarity ranking information. The semi-supervised contrastive loss is able to maximize the semantic similarity on both labeled and unlabeled data. Compared to the existing methods, CDMLMR exploits not only the similarity ranking information but also unlabeled cross-modal data, and thus boosts cross-modal retrieval accuracy.",
"title": ""
},
{
"docid": "74724f58c6542a75f7510ac79571c90d",
"text": "The World Wide Web is moving from a Web of hyper-linked Documents to a Web of linked Data. Thanks to the Semantic Web spread and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data have been published in freely accessible datasets. These datasets are connected with each other to form the so called Linked Open Data cloud. As of today, there are tons of RDF data available in the Web of Data, but only few applications really exploit their potential power. In this paper we show how these data can successfully be used to develop a recommender system (RS) that relies exclusively on the information encoded in the Web of Data. We implemented a content-based RS that leverages the data available within Linked Open Data datasets (in particular DBpedia, Freebase and LinkedMDB) in order to recommend movies to the end users. We extensively evaluated the approach and validated the effectiveness of the algorithms by experimentally measuring their accuracy with precision and recall metrics.",
"title": ""
},
{
"docid": "b106be5cb0510e93b556a14f00877c3b",
"text": "BACKGROUND\nNurses' behavior in Educational-Medical centers is very important for improving the condition of patients. Ethical climate represents the ethical values and behavioral expectations. Attitude of people toward religion is both intrinsic and extrinsic. Different ethical climates and attitude toward religion could be associated with nurses' behavior.\n\n\nAIM\nTo study the mediating effect of ethical climate on religious orientation and ethical behaviors of nurses.\n\n\nRESEARCH DESIGN\nIn an exploratory analysis study, the path analysis method was used to identify the effective variables on ethical behavior. Participants/context: The participants consisted of 259 Iranian nurses from Hamadan University of Medical Sciences. Ethical considerations: This project with an ethical code and a unique ID IR.UMSHA.REC.1395.67 was approved in the Research Council of Hamadan University of Medical Sciences.\n\n\nFINDINGS\nThe beta coefficients obtained by regression analysis of perception of ethical climate of individual egoism (B = -0.202, p < 0.001), individual ethical principles (B = -0.184, p = 0.001), local egoism (B = -0.136, p = 0.003), and extrinsic religious orientation (B = -0.266, p = 0.007) were significant that they could act as predictors of ethical behavior. The summary of regression model indicated that 0.27% of ethical behaviors of nurses are justified by two variables: ethical climate and religious orientation.\n\n\nDISCUSSION AND CONCLUSION\nIntrinsic religious orientation has the most direct impact and then, respectively, the variables of ethical climate of perceptions in the dimensions of individual egoism, individual ethical principles, local egoism, global ethical principle, and ethical behavior and extrinsic religious orientation follow. All the above, except global ethical principles and intrinsic orientation of religion have a negative effect on ethical behavior and can be predictors of ethical behavior. Therefore, applying strategies to promote theories of intrinsic religious orientation and global ethical principles in different situations of nursing is recommended.",
"title": ""
},
{
"docid": "43882b64eec2667444a992d4da5484dd",
"text": "Past research demonstrates that children learn from a previously accurate speaker rather than from a previously inaccurate one. This study shows that children do not necessarily treat a previously inaccurate speaker as unreliable. Rather, they appropriately excuse past inaccuracy arising from the speaker's limited information access. Children (N= 67) aged 3, 4, and 5 years aimed to identify a hidden toy in collaboration with a puppet as informant. When the puppet had previously been inaccurate despite having full information, children tended to ignore what they were told and guess for themselves: They treated the puppet as unreliable in the longer term. However, children more frequently believed a currently well-informed puppet whose past inaccuracies arose legitimately from inadequate information access.",
"title": ""
},
{
"docid": "e464859fd25c6bdcf266ceec090af9f2",
"text": "AC ◦ MOD2 circuits are AC circuits augmented with a layer of parity gates just above the input layer. We study AC ◦MOD2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC ◦MOD2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n) lower bound for the special case of depth-4 AC ◦MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching” inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions’ values at 0, given that their first d moments match. ∗Simons Institute for the Theory of Computing, University of California, Berkeley, CA. Email: cheraghchi@berkeley.edu. Supported by a Qualcomm fellowship. †Department of Computer Science, Purdue University, West Lafayette, IN. Email: elena-g@purdue.edu. ‡Department of Computer Science and Engineering, Washington University, St. Louis, MO. Email: bjuba@wustl.edu. Supported by an AFOSR Young Investigator award. §Department of Mathematics, Duquesne University, Pittsburgh, PA. Email: wimmerk@duq.edu. Supported by NSF award CCF-1117079. ¶SCIS, Florida International University, Miami, FL. Email: nxie@cs.fiu.edu. Research supported in part by NSF grant 1423034.",
"title": ""
},
{
"docid": "b2382c9b14526bf7fe526e4d3dc82601",
"text": "We have proposed, fabricated, and studied a new design of a high-speed optical non-volatile memory. The recoding mechanism of the proposed memory utilizes a magnetization reversal of a nanomagnet by a spin-polarized photocurrent. It was shown experimentally that the operational speed of this memory may be extremely fast above 1 TBit/s. The challenges to realize both a high-speed recording and a high-speed reading are discussed. The memory is compact, integratable, and compatible with present semiconductor technology. If realized, it will advance data processing and computing technology towards a faster operation speed.",
"title": ""
}
] |
scidocsrr
|
c452c6a4553d343cefe3fd686b2c8692
|
Analyzing Argumentative Discourse Units in Online Interactions
|
[
{
"docid": "d7a348b092064acf2d6a4fd7d6ef8ee2",
"text": "Argumentation theory involves the analysis of naturally occurring argument, and one key tool employed to this end both in the academic community and in teaching critical thinking skills to undergraduates is argument diagramming. By identifying the structure of an argument in terms of its constituents and the relationships between them, it becomes easier to critically evaluate each part of an argument in turn. The task of analysis and diagramming, however, is labor intensive and often idiosyncratic, which can make academic exchange difficult. The Araucaria system provides an interface which supports the diagramming process, and then saves the result using AML, an open standard, designed in XML, for describing argument structure. Araucaria aims to be of use not only in pedagogical situations, but also in support of research activity. As a result, it has been designed from the outset to handle more advanced argumentation theoretic concepts such as schemes, which capture stereotypical patterns of reasoning. The software is also designed to be compatible with a number of applications under development, including dialogic interaction and online corpus provision. Together, these features, combined with its platform independence and ease of use, have the potential to make Araucaria a valuable resource for the academic community.",
"title": ""
},
{
"docid": "5f7adc28fab008d93a968b6a1e5ad061",
"text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.",
"title": ""
}
] |
[
{
"docid": "a3ad2be5b2b44277026ee9f84c0d416b",
"text": "In order to attain a useful balanced scorecard (BSC), appropriate performance perspectives and indicators are crucial to reflect all strategies of the organisation. The objectives of this survey were to give an insight regarding the situation of the BSC in the health sector over the past decade, and to afford a generic approach of the BSC development for health settings with specific focus on performance perspectives, performance indicators and BSC generation. After an extensive search based on publication date and research content, 29 articles published since 2002 were identified, categorised and analysed. Four critical attributes of each article were analysed, including BSC generation, performance perspectives, performance indicators and auxiliary tools. The results showed that 'internal business process' was the most notable BSC perspective as it was included in all reviewed articles. After investigating the literature, it was concluded that its comprehensiveness is the reason for the importance and high usage of this perspective. The findings showed that 12 cases out of 29 reviewed articles (41%) exceeded the maximum number of key performance indicators (KPI) suggested in a previous study. It was found that all 12 cases were large organisations with numerous departments (e.g. national health organisations). Such organisations require numerous KPI to cover all of their strategic objectives. It was recommended to utilise the cascaded BSC within such organisations to avoid complexity and difficulty in gathering, analysing and interpreting performance data. Meanwhile it requires more medical staff to contribute in BSC development, which will result in greater reliability of the BSC.",
"title": ""
},
{
"docid": "7e0b9941d5019927fce0a1223a88d6b5",
"text": "Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a \"Challenge Project on Video Event Taxonomy\" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices.",
"title": ""
},
{
"docid": "799ccd75d6781e38cf5e2faee5784cae",
"text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"title": ""
},
{
"docid": "d3f97e0de15ab18296e161e287890e18",
"text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.",
"title": ""
},
{
"docid": "3dd8c177ae928f7ccad2aa980bd8c747",
"text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.",
"title": ""
},
{
"docid": "03dc797bafa51245791de2b7c663a305",
"text": "In many applications of computational geometry to modeling objects and processes in the physical world, the participating objects are in a state of continuous change. Motion is the most ubiquitous kind of continuous transformation but others, such as shape deformation, are also possible. In a recent paper, Baech, Guibas, and Hershberger [BGH97] proposed the framework of kinetic data structures (KDSS) as a way to maintain, in a completely on-line fashion, desirable information about the state of a geometric system in continuous motion or change. They gave examples of kinetic data structures for the maximum of a set of (changing) numbers, and for the convex hull and closest pair of a set of (moving) points in the plane. The KDS frameworkallowseach object to change its motion at will according to interactions with other moving objects, the environment, etc. We implemented the KDSSdescribed in [BGH97],es well as came alternative methods serving the same purpose, as a way to validate the kinetic data structures framework in practice. In this note, we report some preliminary results on the maintenance of the convex hull, describe the experimental setup, compare three alternative methods, discuss the value of the measures of quality for KDSS proposed by [BGH97],and highlight some important numerical issues.",
"title": ""
},
{
"docid": "d8143c0b083defa15182e079b23bdfe8",
"text": "OBJECTIVES\nThe purpose of this study was to compare the incidence of genital injury following penile-vaginal penetration with and without consent.\n\n\nDESIGN\nThis study compared observations of genital injuries from two cohorts.\n\n\nSETTING\nParticipants were drawn from St. Mary's Sexual Assault Referral Centre and a general practice surgery in Manchester, and a general practice surgery in Buckinghamshire.\n\n\nPARTICIPANTS\nTwo cohorts were recruited: a retrospective cohort of 500 complainants referred to a specialist Sexual Assault Referral Centre (the Cases) and 68 women recruited at the time of their routine cervical smear test who had recently had sexual intercourse (the Comparison group).\n\n\nMAIN OUTCOME MEASURES\nPresence of genital injuries.\n\n\nRESULTS\n22.8% (n=00, 95% CI 19.2-26.7) of adult complainants of penile-vaginal rape by a single assailant sustained an injury to the genitalia that was visible within 48h of the incident. This was approximately three times more than the 5.9% (n=68, 95% CI 1.6-14.4) of women who sustained a genital injury during consensual sex. This was a statistically significant difference (a<0.05, p=0.0007). Factors such as hormonal status, position during intercourse, criminal justice outcome, relationship to assailant, and the locations, sizes and types of injuries were also considered but the only factor associated with injury was the relationship with the complainant, with an increased risk of injury if the assailant was known to the complainant (p=0.019).\n\n\nCONCLUSIONS\nMost complainants of rape (n=500, 77%, 95% CI 73-81%) will not sustain any genital injury, although women are three times more likely to sustain a genital injury from an assault than consensual intercourse.",
"title": ""
},
{
"docid": "1add7dcbe4f7c666e0453d5fa6661b31",
"text": "Convolutive blind source separation (CBSS) that exploits the sparsity of source signals in the frequency domain is addressed in this paper. We assume the sources follow complex Laplacian-like distribution for complex random variable, in which the real part and imaginary part of complex-valued source signals are not necessarily independent. Based on the maximum a posteriori (MAP) criterion, we propose a novel natural gradient method for complex sparse representation. Moreover, a new CBSS method is further developed based on complex sparse representation. The developed CBSS algorithm works in the frequency domain. Here, we assume that the source signals are sufficiently sparse in the frequency domain. If the sources are sufficiently sparse in the frequency domain and the filter length of mixing channels is relatively small and can be estimated, we can even achieve underdetermined CBSS. We illustrate the validity and performance of the proposed learning algorithm by several simulation examples.",
"title": ""
},
{
"docid": "890a2092f3f55799e9c0216dac3d9e2f",
"text": "The rise in popularity of permissioned blockchain platforms in recent time is significant. Hyperledger Fabric is one such permissioned blockchain platform and one of the Hyperledger projects hosted by the Linux Foundation. The Fabric comprises various components such as smart-contracts, endorsers, committers, validators, and orderers. As the performance of blockchain platform is a major concern for enterprise applications, in this work, we perform a comprehensive empirical study to characterize the performance of Hyperledger Fabric and identify potential performance bottlenecks to gain a better understanding of the system. We follow a two-phased approach. In the first phase, our goal is to understand the impact of various configuration parameters such as block size, endorsement policy, channels, resource allocation, state database choice on the transaction throughput & latency to provide various guidelines on configuring these parameters. In addition, we also aim to identify performance bottlenecks and hotspots. We observed that (1) endorsement policy verification, (2) sequential policy validation of transactions in a block, and (3) state validation and commit (with CouchDB) were the three major bottlenecks. In the second phase, we focus on optimizing Hyperledger Fabric v1.0 based on our observations. We introduced and studied various simple optimizations such as aggressive caching for endorsement policy verification in the cryptography component (3x improvement in the performance) and parallelizing endorsement policy verification (7x improvement). Further, we enhanced and measured the effect of an existing bulk read/write optimization for CouchDB during state validation & commit phase (2.5x improvement). By combining all three optimizations1, we improved the overall throughput by 16x (i.e., from 140 tps to 2250 tps).",
"title": ""
},
{
"docid": "fe903498e0c3345d7e5ebc8bf3407c2f",
"text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.",
"title": ""
},
{
"docid": "de0761b7a43cafe7f30d6f8e518dd031",
"text": "The Internet of Things (IOT) has been denoted as a new wave of information and communication technology (ICT) advancements. The IOT is a multidisciplinary concept that encompasses a wide range of several technologies, application domains, device capabilities, and operational strategies, etc. The ongoing IOT research activities are directed towards the definition and design of standards and open architectures which is still have the issues requiring a global consensus before the final deployment. This paper gives over view about IOT technologies and applications related to agriculture with comparison of other survey papers and proposed a novel irrigation management system. Our main objective of this work is to for Farming where various new technologies to yield higher growth of the crops and their water supply. Automated control features with latest electronic technology using microcontroller which turns the pumping motor ON and OFF on detecting the dampness content of the earth and GSM phone line is proposed after measuring the temperature, humidity, and soil moisture.",
"title": ""
},
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
},
{
"docid": "d9950f75380758d0a0f4fd9d6e885dfd",
"text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.",
"title": ""
},
{
"docid": "4c3d8c30223ef63b54f8c7ba3bd061ed",
"text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.",
"title": ""
},
{
"docid": "b5214fd5f8f8849a57d453b47f1d73f0",
"text": "The development of Graphical User Interface (GUI) is meant to significantly increase the ease of usability of software applications so that the can be used by users from different backgrounds and knowledge level. Such a development becomes even more important and challenging when the users are those that have limited literacy capabilities. Although the progress of development for standard software interface has increased significantly, similar progress has not been available in interface for illiterate people. To fill this gap, this paper presents our research on developing interface of software application devoted to illiterate people. In particular, the proposed interface was designed for mobile application and combines graphic design and linguistic approaches. With such feature, the developed interface is expected to provide easy to use application for illiterate people.",
"title": ""
},
{
"docid": "6c9d84ced9dd23cdb7542a50f1459fef",
"text": "This article outlines a framework for the analysis of economic integration and its relation to the asymmetries of economic and social development. Consciously breaking with state-centric forms of social science, it argues for a research agenda that is more adequate to the exigencies and consequences of globalisation than has traditionally been the case in 'development studies'. Drawing on earlier attempts to analyse the crossborder activities of firms, their spatial configurations and developmental consequences, the article moves beyond these by proposing the framework of the 'global production network' (GPN). It explores the conceptual elements involved in this framework in some detail and then turns to sketch a stylised example of a GPN. The article concludes with a brief indication of the benefits that could be delivered be research informed by GPN analysis.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "6927647b1e1f6bf9bcf65db50e9f8d6e",
"text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.",
"title": ""
},
{
"docid": "81b5379abf3849e1ae4e233fd4955062",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
},
{
"docid": "c11b77f1392c79f4a03f9633c8f97f4d",
"text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.",
"title": ""
}
] |
scidocsrr
|
58a83c37bf4e499e68fdc64b63f2f55c
|
Online travel reviews as persuasive communication : The effects of content type , source , and certi fi cation logos on consumer behavior
|
[
{
"docid": "032f5b66ae4ede7e26a911c9d4885b98",
"text": "Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "99ea14010fe3acd37952fb355a25b71c",
"text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.",
"title": ""
},
{
"docid": "3332bf8d62c1176b8f5f0aa2bb045d24",
"text": "BACKGROUND\nInfectious mononucleosis caused by the Epstein-Barr virus has been associated with increased risk of multiple sclerosis. However, little is known about the characteristics of this association.\n\n\nOBJECTIVE\nTo assess the significance of sex, age at and time since infectious mononucleosis, and attained age to the risk of developing multiple sclerosis after infectious mononucleosis.\n\n\nDESIGN\nCohort study using persons tested serologically for infectious mononucleosis at Statens Serum Institut, the Danish Civil Registration System, the Danish National Hospital Discharge Register, and the Danish Multiple Sclerosis Registry.\n\n\nSETTING\nStatens Serum Institut.\n\n\nPATIENTS\nA cohort of 25 234 Danish patients with mononucleosis was followed up for the occurrence of multiple sclerosis beginning on April 1, 1968, or January 1 of the year after the diagnosis of mononucleosis or after a negative Paul-Bunnell test result, respectively, whichever came later and ending on the date of multiple sclerosis diagnosis, death, emigration, or December 31, 1996, whichever came first.\n\n\nMAIN OUTCOME MEASURE\nThe ratio of observed to expected multiple sclerosis cases in the cohort (standardized incidence ratio).\n\n\nRESULTS\nA total of 104 cases of multiple sclerosis were observed during 556,703 person-years of follow-up, corresponding to a standardized incidence ratio of 2.27 (95% confidence interval, 1.87-2.75). The risk of multiple sclerosis was persistently increased for more than 30 years after infectious mononucleosis and uniformly distributed across all investigated strata of sex and age. The relative risk of multiple sclerosis did not vary by presumed severity of infectious mononucleosis.\n\n\nCONCLUSIONS\nThe risk of multiple sclerosis is increased in persons with prior infectious mononucleosis, regardless of sex, age, and time since infectious mononucleosis or severity of infection. The risk of multiple sclerosis may be increased soon after infectious mononucleosis and persists for at least 30 years after the infection.",
"title": ""
},
{
"docid": "9609d87c2e75b452495e7fb779a94027",
"text": "Cyclophosphamide (CYC) has been the backbone immunosuppressive drug to achieve sustained remission in lupus nephritis (LN). The aim was to evaluate the efficacy and compare adverse effects of low and high dose intravenous CYC therapy in Indian patients with proliferative lupus nephritis. An open-label, parallel group, randomized controlled trial involving 75 patients with class III/IV LN was conducted after obtaining informed consent. The low dose group (n = 38) received 6 × 500 mg CYC fortnightly and high dose group (n = 37) received 6 × 750 mg/m2 CYC four-weekly followed by azathioprine. The primary outcome was complete/partial/no response at 52 weeks. The secondary outcomes were renal and non-renal flares and adverse events. Intention-to-treat analyses were performed. At 52 weeks, 27 (73%) in high dose group achieved complete/partial response (CR/PR) vs 19 (50%) in low dose (p = 0.04). CR was higher in the high dose vs low dose [24 (65%) vs 17 (44%)], although not statistically significant. Non-responders (NR) in the high dose group were also significantly lower 10 (27%) vs low dose 19 (50%) (p = 0.04). The change in the SLEDAI (Median, IQR) was also higher in the high dose 16 (7–20) in contrast to the low dose 10 (5.5–14) (p = 0.04). There was significant alopecia and CYC-induced leucopenia in high dose group. Renal relapses were significantly higher in the low dose group vs high dose [9 (24%) vs 1(3%), (p = 0.01)]. At 52 weeks, high dose CYC was more effective in inducing remission with decreased renal relapses in our population. Trial Registration: The study was registered at http://www.clintrials.gov. NCT02645565.",
"title": ""
},
{
"docid": "a18e6f80284a96f680fb00cb3f0cc692",
"text": "We demonstrate an 8-layer 3D Vertical Gate NAND Flash with WL half pitch =37.5nm, BL half pitch=75nm, 64-WL NAND string with 63% array core efficiency. This is the first time that a 3D NAND Flash can be successfully scaled to below 3Xnm half pitch in one lateral dimension, thus an 8-layer stack device already provides a very cost effective technology with lower cost than the conventional sub-20nm 2D NAND. Our new VG architecture has two key features: (1) To improve the manufacturability a new layout that twists the even/odd BL's (and pages) in the opposite direction (split-page BL) is adopted. This allows the island-gate SSL devices [1] and metal interconnections be laid out in double pitch, creating much larger process window for BL pitch scaling; (2) A novel staircase BL contact formation method using binary sum of only M lithography and etching steps to achieve 2M contacts. This not only allows precise landing of the tight-pitch staircase contacts, but also minimizes the process steps and cost. We have successfully fabricated an 8-layer array using TFT BE-SONOS charge-trapping device. The array characteristics including reading, programming, inhibit, and block erase are demonstrated.",
"title": ""
},
{
"docid": "c1713b817c4b2ce6e134b6e0510a961f",
"text": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction.",
"title": ""
},
{
"docid": "64bd2fc0d1b41574046340833144dabe",
"text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.",
"title": ""
},
{
"docid": "8318d49318f442749bfe3a33a3394f42",
"text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.",
"title": ""
},
{
"docid": "a11ed66e5368060be9585022db65c2ad",
"text": "This article provides a historical context of evolutionary psychology and feminism, and evaluates the contributions to this special issue of Sex Roles within that context. We briefly outline the basic tenets of evolutionary psychology and articulate its meta-theory of the origins of gender similarities and differences. The article then evaluates the specific contributions: Sexual Strategies Theory and the desire for sexual variety; evolved standards of beauty; hypothesized adaptations to ovulation; the appeal of risk taking in human mating; understanding the causes of sexual victimization; and the role of studies of lesbian mate preferences in evaluating the framework of evolutionary psychology. Discussion focuses on the importance of social and cultural context, human behavioral flexibility, and the evidentiary status of specific evolutionary psychological hypotheses. We conclude by examining the potential role of evolutionary psychology in addressing social problems identified by feminist agendas.",
"title": ""
},
{
"docid": "066fdb2deeca1d13218f16ad35fe5f86",
"text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.",
"title": ""
},
{
"docid": "bd06f693359bba90de59454f32581c9c",
"text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. Using the cloud clustering framework, we identify how the design affects cloud integration services.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "20d754528009ebce458eaa748312b2fe",
"text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.",
"title": ""
},
{
"docid": "2adde1812974f2d5d35d4c7e31ca7247",
"text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html (1 of 55) [17/01/2002 08:32:46 p.m.]",
"title": ""
},
{
"docid": "8caaea6ffb668c019977809773a6d8c5",
"text": "In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser–Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38 and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8 .9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. c © 2001 Academic Press",
"title": ""
},
{
"docid": "23a5d1aebe5e2f7dd5ed8dfde17ce374",
"text": "Today's workplace often includes workers from 4 distinct generations, and each generation brings a unique set of core values and characteristics to an organization. These generational differences can produce benefits, such as improved patient care, as well as challenges, such as conflict among employees. This article reviews current research on generational differences in educational settings and the workplace and discusses the implications of these findings for medical imaging and radiation therapy departments.",
"title": ""
},
{
"docid": "b317f33d159bddce908df4aa9ba82cf9",
"text": "Point cloud source data for surface reconstruction is usually contaminated with noise and outliers. To overcome this deficiency, a density-based point cloud denoising method is presented to remove outliers and noisy points. First, particle-swam optimization technique is employed for automatically approximating optimal bandwidth of multivariate kernel density estimation to ensure the robust performance of density estimation. Then, mean-shift based clustering technique is used to remove outliers through a thresholding scheme. After removing outliers from the point cloud, bilateral mesh filtering is applied to smooth the remaining points. The experimental results show that this approach, comparably, is robust and efficient.",
"title": ""
},
{
"docid": "b6ff96922a0b8e32236ba8fb44bf4888",
"text": "Most people acknowledge that personal computers have enormously enhanced the autonomy and communication capacity of people with special needs. The key factor for accessibility to these opportunities is the adequate design of the user interface which, consequently, has a high impact on the social lives of users with disabilities. The design of universally accessible interfaces has a positive effect over the socialisation of people with disabilities. People with sensory disabilities can profit from computers as a way of personal direct and remote communication. Personal computers can also assist people with severe motor impairments to manipulate their environment and to enhance their mobility by means of, for example, smart wheelchairs. In this way they can become more socially active and productive. Accessible interfaces have become so indispensable for personal autonomy and social inclusion that in several countries special legislation protects people from ‘digital exclusion’. To apply this legislation, inexperienced HCI designers can experience difficulties. They would greatly benefit from inclusive design guidelines in order to be able to implement the ‘design for all’ philosophy. In addition, they need clear criteria to avoid negative social and ethical impact on users. This paper analyses the benefits of the use of inclusive design guidelines in order to facilitate a universal design focus so that social exclusion is avoided. In addition, the need for ethical and social guidelines in order to avoid undesirable side effects for users is discussed. Finally, some preliminary examples of socially and ethically aware guidelines are proposed. q 2005 Elsevier B.V. All rights reserved. Interacting with Computers 17 (2005) 484–505 www.elsevier.com/locate/intcom 0953-5438/$ see front matter q 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.intcom.2005.03.002 * Corresponding author. E-mail address: julio.abascal@ehu.es (J. Abascal). J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 485 1. HCI and people with disabilities Most people living in developed countries have direct or indirect relationships with computers in diverse ways. In addition, there exist many tasks that could hardly be performed without computers, leading to a dependence on Information Technology. Moreover, people not having access to computers can suffer the effects of the so-called digital divide (Fitch, 2002), a new type of social exclusion. People with disabilities are one of the user groups with higher computer dependence because, for many of them, the computer is the only way to perform several vital tasks, such as personal and remote communication, control of the environment, assisted mobility, access to telematic networks and services, etc. Digital exclusion for disabled people means not having full access to a socially active and independent lifestyle. In this way, Human-Computer Interaction (HCI) is playing an important role in the provision of social opportunities to people with disabilities (Abascal and Civit, 2002). 2. HCI and social integration 2.1. Gaining access to computers Computers provide very effective solutions to help people with disabilities to enhance their social integration. For instance, people with severe speech and motor impairments have serious difficulties to communicate with other people and to perform common operations in their close environment (e.g. to handle objects). For them, computers are incredibly useful as alternative communication devices. 
Messages can be composed using special keyboards (Lesher et al., 1998), scanning with one or two switches, by means of eye tracking (Sibert and Jacob, 2000), etc. Current software techniques also allow the design of methods to enhance the message composition speed. For instance, Artificial Intelligence methods are frequently used to design word prediction aids to assist in the typing of text with minimum effort (Garay et al., 1997). Computers can also assist the disabled user to autonomously control the environment through wireless communication, to drive smart electric powered wheelchairs, to control assistive robotic arms, etc. What is more, the integration of all of these services allows people with disabilities using the same interface to perform all tasks in a similar way (Abascal and Civit, 2001a). This is possible because assistive technologists have devoted much effort to providing disabled people with devices and procedures to enhance or substitute their physical and cognitive functions in order to be able to gain access to computers (Cook and Hussey, 2002). 2.2. Using commercial software When the need of gaining access to a PC is solved, the user faces another problem due to difficulties in using commercial software. Many applications have been designed without taking into account that they can be used by people using Assistive Technology devices, and therefore they may have unnecessary barriers which impede the use of alternative interaction devices. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 486 This is the case for one of the most promising application fields nowadays: the internet. A PC linked to a telematic network opens the door to new remote services that can be crucial for people with disabilities. Services such us tele-teaching, tele-care, tele-working, tele-shopping, etc., may enormously enhance their quality of life. These are just examples of the great interest of gaining access to services provided by means of computers for people with disabilities. However, if these services are not accessible, they are useless for people with disabilities. In addition, even if the services are accessible, that is, the users can actually perform the tasks they wish to, it is also important that users can perform those tasks easily, effectively and efficiently. Usability, therefore, is also a key requirement. 2.3. Social demand for accessibility and usability Two factors, among others, have greatly influenced the social demand for accessible computing. The first factor was the technological revolution produced by the availability of personal computers that became smaller, cheaper, lower in consumption, and easier to use than previous computing machines. In parallel, a social revolution has evolved as a result of the battle against social exclusion ever since disabled people became conscious of their rights and needs. The conjunction of computer technology in the form of inexpensive and powerful personal computers, with the struggle of people with disabilities towards autonomous life and social integration, produced the starting point of a new technological challenge. This trend has been also supported in some countries by laws that prevent technological exclusion of people with disabilities and favour the inclusive use of technology (e.g. the Americans with Disabilities Act in the United States and the Disability Discrimination Act in the United Kingdom). 
The next sections discuss how this situation influenced the design of user interfaces for people with disabilities. 3. User interfaces for people with disabilities With the popularity of personal computers many technicians realised that they could become an indispensable tool to assist people with disabilities for most necessary tasks. They soon discovered that a key issue was the availability of suitable user interfaces, due to the special requirements of these users. But the variety of needs and the wide diversity of physical, sensory and cognitive characteristics make the design of interfaces very complex. An interesting process has occurred whereby we have moved from a computer ‘patchwork’ situation to the adoption of more structured HCI methodologies. In the next sections, this process is briefly described, highlighting issues that can and should lead to inclusive design guidelines for socially and ethically aware HCI. 1 Americans with Disabilities Act (ADA). Available at http://www.usdoj.gov/crt/ada/adahom1.htm, last accessed January 15, 2005. 2 Disabilty Discrimination Act (DDA). Available at http://www.disability.gov.uk/dda/index.html, last accessed January 15, 2005. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 487 3.1. First approach: adaptation of existing systems For years, the main activity of people working in Assistive Technology was the adaptation of commercially available computers to the capabilities of users with disabilities. Existing computer interaction style was mainly based on a standard keyboard and mouse for input, and output was based on a screen for data, a printer for hard copy, and a ‘bell’ for some warnings and signals. This kind of interface takes for granted the fact that users have the following physical skills: enough sight capacity to read the screen, movement control and strength in the hands to handle the standard keyboard, coordination for mouse use, and also hearing capacity for audible warnings. In addition, cognitive capabilities to read, understand, reason, etc., were also assumed. When one or more of these skills were lacking, conscientious designers would try to substitute them by another capability, or an alternative way of communication. For instance, blind users could hear the content of the screen when it was read aloud by a textto-voice translator. Alternatively, output could be directed to a Braille printer, or matrix of pins. Thus, adaptation was done in the following way: first, detecting the barriers to gain access to the computer by a user or a group of users, and then, providing them with an alternative way based on the abilities and skills present in this group of users. This procedure often succeeded, producing very useful alternative ways to use computers. Nevertheless, some drawbacks were detected: † Lack of generality: the smaller the group of users the design is focused on, the better results were obtained. Therefore, different systems had to be designed to fit the needs of us",
"title": ""
},
{
"docid": "d72092cd909d88e18598925024dc6b97",
"text": "This paper focuses on the robust dissipative fault-tolerant control problem for one kind of Takagi-Sugeno (T-S) fuzzy descriptor system with actuator failures. The solvable conditions of the robust dissipative fault-tolerant controller are given by using of the Lyapunov theory, Lagrange interpolation polynomial theory, etc. These solvable conditions not only make the closed loop system dissipative, but also integral for the actuator failure situation. The dissipative fault-tolerant controller design methods are given by the aid of the linear matrix inequality toolbox, the function of randomly generated matrix, loop statement, and numerical solution, etc. Thus, simulation process is fully intelligent and efficient. At the same time, the design methods are also obtained for the passive and H∞ fault-tolerant controllers. This explains the fact that the dissipative control unifies H∞ control and passive control. Finally, we give example that illustrates our results.",
"title": ""
},
{
"docid": "446a7404a0e4e78156532fcb93270475",
"text": "Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.",
"title": ""
},
{
"docid": "14f539b7c27aeb96025045a660416e39",
"text": "This paper describes a method for the automatic self-calibration of a 3D Laser sensor. We wish to acquire crisp point clouds and so we adopt a measure of crispness to capture point cloud quality. We then pose the calibration problem as the task of maximising point cloud quality. Concretely, we use Rényi Quadratic Entropy to measure the degree of organisation of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimisation. Beyond details on the sensor design itself, we fully describe the end-to-end intrinsic parameter calibration process and the estimation of the clock skews between the constituent microprocessors. We analyse performance using real and simulated data and demonstrate robust performance over thirty test sites.",
"title": ""
}
] |
scidocsrr
|
2c6848e03b871a46c9228a2951dc7f4f
|
Analysis of Social Networks Using the Techniques of Web Mining
|
[
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
}
] |
[
{
"docid": "ed9f79cab2dfa271ee436b7d6884bc13",
"text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.",
"title": ""
},
{
"docid": "6614eeffe9fb332a028b1e80aa24016a",
"text": "Advances in microelectronics, array processing, and wireless networking, have motivated the analysis and design of low-cost integrated sensing, computating, and communicating nodes capable of performing various demanding collaborative space-time processing tasks. In this paper, we consider the problem of coherent acoustic sensor array processing and localization on distributed wireless sensor networks. We first introduce some basic concepts of beamforming and localization for wideband acoustic sources. A review of various known localization algorithms based on time-delay followed by LS estimations as well as maximum likelihood method is given. Issues related to practical implementation of coherent array processing including the need for fine-grain time synchronization are discussed. Then we describe the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization. Various field-measured results using two localization algorithms show the effectiveness of the proposed testbed. An extensive list of references related to this work is also included. Keywords— Beamforming, Source Localization, Distributed Sensor Network, Wireless Network, Ad Hoc Network, Microphone Array, Time Synchronization.",
"title": ""
},
{
"docid": "805583da675c068b7cc2bca80e918963",
"text": "Designing an actuator system for highly dynamic legged robots has been one of the grand challenges in robotics research. Conventional actuators for manufacturing applications have difficulty satisfying design requirements for high-speed locomotion, such as the need for high torque density and the ability to manage dynamic physical interactions. To address this challenge, this paper suggests a proprioceptive actuation paradigm that enables highly dynamic performance in legged machines. Proprioceptive actuation uses collocated force control at the joints to effectively control contact interactions at the feet under dynamic conditions. Modal analysis of a reduced leg model and dimensional analysis of DC motors address the main principles for implementation of this paradigm. In the realm of legged machines, this paradigm provides a unique combination of high torque density, high-bandwidth force control, and the ability to mitigate impacts through backdrivability. We introduce a new metric named the “impact mitigation factor” (IMF) to quantify backdrivability at impact, which enables design comparison across a wide class of robots. The MIT Cheetah leg is presented, and is shown to have an IMF that is comparable to other quadrupeds with series springs to handle impact. The design enables the Cheetah to control contact forces during dynamic bounding, with contact times down to 85 ms and peak forces over 450 N. The unique capabilities of the MIT Cheetah, achieving impact-robust force-controlled operation in high-speed three-dimensional running and jumping, suggest wider implementation of this holistic actuation approach.",
"title": ""
},
{
"docid": "c2b41a637cdc46abf0e154368a5990df",
"text": "Ideally, the time that an incremental algorithm uses to process a change should be a fimction of the size of the change rather than, say, the size of the entire current input. Based o n a formalization of \"the set of things changed\" by an increInental modification, this paper investigates how and to what extent it is possibh~' to give such a guarantee for a chart-ba.se(l parsing frmnework and discusses the general utility of a tninlmality notion in incremental processing) 1 I n t r o d u c t i o n",
"title": ""
},
{
"docid": "cd1a5d05e1991accd0a733ae0f2b7afc",
"text": "This paper presents the application of an embedded camera system for detecting laser spot in the shooting simulator. The proposed shooting simulator uses a specific target box, where the circular pattern target is mounted. The embedded camera is installed inside the box to capture the circular pattern target and laser spot image. To localize the circular pattern automatically, two colored solid circles are painted on the target. This technique allows the simple and fast color tracking to track the colored objects for localizing the circular pattern. The CMUCam4 is employed as the embedded camera. It is able to localize the target and detect the laser spot in real-time at 30 fps. From the experimental results, the errors in calculating shooting score and detecting laser spot are 3.82% and 0.68% respectively. Further the proposed system provides the more accurate scoring system in real number compared to the conventional integer number.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "867516a6a54105e4759338e407bafa5a",
"text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. In the conclusion section we discuss the issues raised in this work and directions for future work.",
"title": ""
},
{
"docid": "0cd42818f21ada2a8a6c2ed7a0f078fe",
"text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.",
"title": ""
},
{
"docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7",
"text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.",
"title": ""
},
{
"docid": "024e4eebc8cb23d85676df920316f62c",
"text": "E-voting technology has been developed for more than 30 years. However it is still distance away from serious application. The major challenges are to provide a secure solution and to gain trust from the voters in using it. In this paper we try to present a comprehensive review to e-voting by looking at these challenges. We summarized the vast amount of security requirements named in the literature that allows researcher to design a secure system. We reviewed some of the e-voting systems found in the real world and the literature. We also studied how a e-voting system can be usable by looking at different usability research conducted on e-voting. Summarizes on different cryptographic tools in constructing e-voting systems are also presented in the paper. We hope this paper can served as a good introduction for e-voting researches.",
"title": ""
},
{
"docid": "22cdfb6170fab44905a8f79b282a1313",
"text": "CONTEXT\nInteprofessional collaboration (IPC) between biomedically trained doctors (BMD) and traditional, complementary and alternative medicine practitioners (TCAMP) is an essential element in the development of successful integrative healthcare (IHC) services. This systematic review aims to identify organizational strategies that would facilitate this process.\n\n\nMETHODS\nWe searched 4 international databases for qualitative studies on the theme of BMD-TCAMP IPC, supplemented with a purposive search of 31 health services and TCAM journals. Methodological quality of included studies was assessed using published checklist. Results of each included study were synthesized using a framework approach, with reference to the Structuration Model of Collaboration.\n\n\nFINDINGS\nThirty-seven studies of acceptable quality were included. The main driver for developing integrative healthcare was the demand for holistic care from patients. Integration can best be led by those trained in both paradigms. Bridge-building activities, positive promotion of partnership and co-location of practices are also beneficial for creating bonding between team members. In order to empower the participation of TCAMP, the perceived power differentials need to be reduced. Also, resources should be committed to supporting team building, collaborative initiatives and greater patient access. Leadership and funding from central authorities are needed to promote the use of condition-specific referral protocols and shared electronic health records. More mature IHC programs usually formalize their evaluation process around outcomes that are recognized both by BMD and TCAMP.\n\n\nCONCLUSIONS\nThe major themes emerging from our review suggest that successful collaborative relationships between BMD and TCAMP are similar to those between other health professionals, and interventions which improve the effectiveness of joint working in other healthcare teams with may well be transferable to promote better partnership between the paradigms. However, striking a balance between the different practices and preserving the epistemological stance of TCAM will remain the greatest challenge in successful integration.",
"title": ""
},
{
"docid": "b3af820192d34b6066498e04b9a51e31",
"text": "Nowadays there are studies in different fields aimed to extract relevant information on trends, challenges and opportunities; all these studies have something in common: they work with large volumes of data. This work analyzes different studies carried out on the use of Machine Learning (ML) for processing large volumes of data (Big Data). Most of these datasets, are complex and come from various sources with structured or unstructured data. For this reason, it is necessary to find mechanisms that allow classification and, in a certain way, organize them to facilitate to the users the extraction of the required information. The processing of these data requires the use of classification techniques that will also be reviewed.",
"title": ""
},
{
"docid": "10b7ce647229f3c9fe5aeced5be85e38",
"text": "The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent.",
"title": ""
},
{
"docid": "f02bd91e8374506aa4f8a2107f9545e6",
"text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7bbfafb6de6ccd50a4a708af76588beb",
"text": "In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, which is crawled using a state of the art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed-up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge this is the first system, which demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.",
"title": ""
},
{
"docid": "30e287e44e66e887ad5d689657e019c3",
"text": "OBJECTIVE\nThe purpose of this study was to determine whether the Sensory Profile discriminates between children with and without autism and which items on the profile best discriminate between these groups.\n\n\nMETHOD\nParents of 32 children with autism aged 3 to 13 years and of 64 children without autism aged 3 to 10 years completed the Sensory Profile. A descriptive analysis of the data set of children with autism identified the distribution of responses on each item. A multivariate analysis of covariance (MANCOVA) of each category of the Sensory Profile identified possible differences among subjects without autism, with mild or moderate autism, and with severe autism. Follow-up univariate analyses were conducted for any category that yielded a significant result on the MANCOVA:\n\n\nRESULTS\nEight-four of 99 items (85%) on the Sensory Profile differentiated the sensory processing skills of subjects with autism from those without autism. There were no group differences between subjects with mild or moderate autism and subjects with severe autism.\n\n\nCONCLUSION\nThe Sensory Profile can provide information about the sensory processing skills of children with autism to assist occupational therapists in assessing and planning intervention for these children.",
"title": ""
},
{
"docid": "510439267c11c53b31dcf0b1c40e331b",
"text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.",
"title": ""
},
{
"docid": "09fc272a6d9ea954727d07075ecd5bfd",
"text": "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.",
"title": ""
},
{
"docid": "63063c0a2b08f068c11da6d80236fa87",
"text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.",
"title": ""
}
] |
scidocsrr
|
70b799ee929463682762f21d422f7b3a
|
Low-Rank Similarity Metric Learning in High Dimensions
|
[
{
"docid": "2d34d9e9c33626727734766a9951a161",
"text": "In this paper, we propose and study the use of alternating direction algorithms for several `1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the `1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an `1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the `1-norm fidelity should be the fidelity of choice in compressive sensing.",
"title": ""
}
] |
[
{
"docid": "1b53b5c7741dad884ab94b3b8a3d8cfd",
"text": "The impact of self-heating effect (SHE) on device reliability characterization, such as BTI, HCI, and TDDB, is extensively examined in this work. Self-heating effect and its impact on device level reliability mechanisms is carefully studied, and an empirical model for layout dependent SHE is established. Since the recovery effect during NBTI characterization is found sensitive to self-heating, either changing VT shift as index or adopting μs-delay measurement system is proposed to get rid of SHE influence. In common HCI stress condition, the high drain stress bias usually leads to high power or self-heating, which may dramatically under-estimate the lifetime extracted. The stress condition Vg = 0.6~0.8Vd is suggested to meet the reasonable operation power and self-heating induced temperature rising. Similarly, drain-bias dependent TDDB characteristics are also under-estimated due to the existence of SHE and need careful calibration to project the lifetime at common usage bias.",
"title": ""
},
{
"docid": "90c46b6e7f125481e966b746c5c76c97",
"text": "Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time.",
"title": ""
},
{
"docid": "5f6e77c95d92c1b8f571921954f252d6",
"text": "Parallel job scheduling has gained increasing recognition in recent years as a distinct area of study. However , there is concern about the divergence of theory and practice in the eld. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, that has grown from requirements in the eld.",
"title": ""
},
{
"docid": "12f717b4973a5290233d6f03ba05626b",
"text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
},
{
"docid": "5a8f8b9094c62b77d9f71cf5b2a3a562",
"text": "Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential-a phenomenon referred to as \"phase-of-firing coding\" (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions-only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents ( approximately 10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they mainly depend on the current input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.",
"title": ""
},
{
"docid": "abe5bdf6a17cf05b49ac578347a3ca5d",
"text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.",
"title": ""
},
{
"docid": "77371cfa61dbb3053f3106f5433d23a7",
"text": "We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.",
"title": ""
},
{
"docid": "7209596ad58da21211bfe0ceaaccc72b",
"text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.",
"title": ""
},
{
"docid": "00c5432a69225bd7a7dbd41f88a1f391",
"text": "I The viewpoint of the subject of matroids, and related areas of lattice theory, has always been, in one way or another, abstraction of algebraic dependence or, equivalently, abstraction of the incidence relations in geometric representations of algebra. Often one of the main derived facts is that all bases have the same cardinality. (See Van der Waerden, Section 33.) From the viewpoint of mathematical programming, the equal cardinality of all bases has special meaning — namely, that every basis is an optimum-cardinality basis. We are thus prompted to study this simple property in the context of linear programming. It turns out to be useful to regard \" pure matroid theory \" , which is only incidentally related to the aspects of algebra which it abstracts, as the study of certain classes of convex polyhedra. (1) A matroid M = (E, F) can be defined as a finite set E and a nonempty family F of so-called independent subsets of E such that (a) Every subset of an independent set is independent, and (b) For every A ⊆ E, every maximal independent subset of A, i.e., every basis of A, has the same cardinality, called the rank, r(A), of A (with respect to M). (This definition is not standard. It is prompted by the present interest).",
"title": ""
},
{
"docid": "932ed2eb35ccf0055a49da12e2d0edfc",
"text": "An intelligent manhole cover management system (IMCS) is one of the most important basic platforms in a smart city to prevent frequent manhole cover accidents. Manhole cover displacement, loss, and damage pose threats to personal safety, which is contrary to the aim of smart cities. This paper proposes an edge computing-based IMCS for smart cities. A unique radio frequency identification tag with tilt and vibration sensors is used for each manhole cover, and a Narrowband Internet of Things is adopted for communication. Meanwhile, edge computing servers interact with corresponding management personnel through mobile devices based on the collected information. A demonstration application of the proposed IMCS in the Xiasha District of Hangzhou, China, showed its high efficiency. It efficiently reduced the average repair time, which could improve the security for both people and manhole covers.",
"title": ""
},
{
"docid": "5c3ae59522d549bed4c059a11b9724c6",
"text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.",
"title": ""
},
{
"docid": "98b536786ecfeab870467c5951924662",
"text": "An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research—the binary, linear, and continuous-nonlinear models—are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of contentaddressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems Purchase Export",
"title": ""
},
{
"docid": "b93825ddae40f61a27435bb255a3cc2e",
"text": "Visual programming arguably provides greater benefit in explicit parallel programming, particularly coarse grain MIMD programming, than in sequential programming. Explicitly parallel programs are multi-dimensional objects; the natural representations of a parallel program are annotated directed graphs: data flow graphs, control flow graphs, etc. where the nodes of the graphs are sequential computations. The execution of parallel programs is a directed graph of instances of sequential computations. A visually based (directed graph) representation of parallel programs is thus more natural than a pure text string language where multi-dimensional structures must be implicitly defined. The naturalness of the annotated directed graph representation of parallel programs enables methods for programming and debugging which are qualitatively different and arguably superior to the conventional practice based on pure text string languages. Annotation of the graphs is a critical element of a practical visual programming system; text is still the best way to represent many aspects of programs. This paper presents a model of parallel programming and a model of execution for parallel programs which are the conceptual framework for a complete visual programming environment including capture of parallel structure, compilation and behavior analysis (performance and debugging). Two visually-oriented parallel programming systems, CODE 2.0 and HeNCE, each based on a variant of the model of programming, will be used to illustrate the concepts. The benefits of visually-oriented realizations of these models for program structure capture, software component reuse, performance analysis and debugging will be explored and hopefully demonstrated by examples in these representations. It is only by actually implementing and using visual parallel programming languages that we have been able to fully evaluate their merits.",
"title": ""
},
{
"docid": "4b96679173c825db7bc334449b6c4b83",
"text": "This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent’s decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.",
"title": ""
},
{
"docid": "87ebf3c29afc0ea6b8c386f8f5ba31f9",
"text": "In this study, we present a weakly supervised approach that discovers the discriminative structures of sketch images, given pairs of sketch images and web images. In contrast to traditional approaches that use global appearance features or relay on keypoint features, our aim is to automatically learn the shared latent structures that exist between sketch images and real images, even when there are significant appearance differences across its relevant real images. To accomplish this, we propose a deep convolutional neural network, named SketchNet. We firstly develop a triplet composed of sketch, positive and negative real image as the input of our neural network. To discover the coherent visual structures between the sketch and its positive pairs, we introduce the softmax as the loss function. Then a ranking mechanism is introduced to make the positive pairs obtain a higher score comparing over negative ones to achieve robust representation. Finally, we formalize above-mentioned constrains into the unified objective function, and create an ensemble feature representation to describe the sketch images. Experiments on the TUBerlin sketch benchmark demonstrate the effectiveness of our model and show that deep feature representation brings substantial improvements over other state-of-the-art methods on sketch classification.",
"title": ""
},
{
"docid": "254fab8fa998333a9c1f261a620c4b23",
"text": "Pathological self-mutilation has been prevalent throughout history and in many cultures. Major self mutilations autocastration, eye enucleation and limb amputation are rarer than minor self-mutilations like wrist cutting, head banging etc. Because of their gruesome nature, major self-mutilations invoke significant negative emotions among therapists and caregivers. Unfortunately, till date, there is very little research in this field. In the absence of robust neurobiological understanding and speculative psychodynamic theories, the current understanding is far from satisfactory. At the same time, the role of culture and society cannot be completely ignored while understanding major self-mutilations. Literature from western culture describes this as an act of repentance towards past bad thoughts or acts in contrast to the traditional eastern culture that praises it as an act of sacrifice for achieving superiority and higher goals in the society. The authors present here two cases of major self-mutilation i.e. autocastration and autoenucleation both of which occurred in patients suffering from schizophrenia. They have also reviewed the existing literature and current understanding of this phenomenon (German J Psychiatry 2010; 13 (4): 164-170).",
"title": ""
},
{
"docid": "095f8d5c3191d6b70b2647b562887aeb",
"text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.",
"title": ""
},
{
"docid": "9292601d14f70925920d3b2ab06a39ce",
"text": "Internet review sites allow consumers to write detailed reviews of products potentially containing information related to user experience (UX) and usability. Using 5198 sentences from 3492 online reviews of software and video games, we investigate the content of online reviews with the aims of (i) charting the distribution of information in reviews among different dimensions of usability and UX, and (ii) extracting an associated vocabulary for each dimension using techniques from natural language processing and machine learning. We (a) find that 13%-49% of sentences in our online reviews pool contain usability or UX information; (b) chart the distribution of four sets of dimensions of usability and UX across reviews from two product categories; (c) extract a catalogue of important word stems for a number of dimensions. Our results suggest that a greater understanding of users' preoccupation with different dimensions of usability and UX may be inferred from the large volume of self-reported experiences online, and that research focused on identifying pertinent dimensions of usability and UX may benefit further from empirical studies of user-generated experience reports.",
"title": ""
}
] |
scidocsrr
|
b133d39f93f87b3f8c051ba53b9acd2a
|
Playing games for security: an efficient exact algorithm for solving Bayesian Stackelberg games
|
[
{
"docid": "4cc4c8fd07f30b5546be2376c1767c19",
"text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.",
"title": ""
}
] |
[
{
"docid": "cc85e917ca668a60461ba6848e4c3b42",
"text": "In this paper a generic method for fault detection and isolation (FDI) in manufacturing systems considered as discrete event systems (DES) is presented. The method uses an identified model of the closed loop of plant and controller built on the basis of observed fault free system behavior. An identification algorithm known from literature is used to determine the fault detection model in form of a non-deterministic automaton. New results of how to parameterize this algorithm are reported. To assess the fault detection capability of an identified automaton, probabilistic measures are proposed. For fault isolation, the concept of residuals adapted for DES is used by defining appropriate set operations representing generic fault symptoms. The method is applied to a case study system.",
"title": ""
},
{
"docid": "8cfa2086e1c73bae6945d1a19d52be26",
"text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.",
"title": ""
},
{
"docid": "90b0ee9cf92c3ff905c2dffda9e3e509",
"text": "Julius is an open-source large-vocabulary speech recognition software used for both academic research and industrial applications. It executes real-time speech recognition of a 60k-word dictation task on low-spec PCs with small footprint, and even on embedded devices. Julius supports standard language models such as statistical N-gram model and rule-based grammars, as well as Hidden Markov Model (HMM) as an acoustic model. One can build a speech recognition system of his own purpose, or can integrate the speech recognition capability to a variety of applications using Julius. This article describes an overview of Julius, major features and specifications, and summarizes the developments conducted in the recent years.",
"title": ""
},
{
"docid": "fc94c6fb38198c726ab3b417c3fe9b44",
"text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.",
"title": ""
},
{
"docid": "e8db06439dc533e0dd24e0920feb70c9",
"text": "Today, vehicles are increasingly being connected to the Internet of Things which enable them to provide ubiquitous access to information to drivers and passengers while on the move. However, as the number of connected vehicles keeps increasing, new requirements (such as seamless, secure, robust, scalable information exchange among vehicles, humans, and roadside infrastructures) of vehicular networks are emerging. In this context, the original concept of vehicular ad-hoc networks is being transformed into a new concept called the Internet of Vehicles (IoV). We discuss the benefits of IoV along with recent industry standards developed to promote its implementation. We further present recently proposed communication protocols to enable the seamless integration and operation of the IoV. Finally, we present future research directions of IoV that require further consideration from the vehicular research community.",
"title": ""
},
{
"docid": "ccb6067614bebf844d96e9a337a4c0d4",
"text": "BACKGROUND\nJoint pain is thought to be an early sign of injury to a pitcher.\n\n\nOBJECTIVE\nTo evaluate the association between pitch counts, pitch types, and pitching mechanics and shoulder and elbow pain in young pitchers.\n\n\nSTUDY DESIGN\nProspective cohort study.\n\n\nMETHODS\nFour hundred and seventy-six young (ages 9 to 14 years) baseball pitchers were followed for one season. Data were collected from pre- and postseason questionnaires, injury and performance interviews after each game, pitch count logs, and video analysis of pitching mechanics. Generalized estimating equations and logistic regression analysis were used.\n\n\nRESULTS\nHalf of the subjects experienced elbow or shoulder pain during the season. The curveball was associated with a 52% increased risk of shoulder pain and the slider was associated with an 86% increased risk of elbow pain. There was a significant association between the number of pitches thrown in a game and during the season and the rate of elbow pain and shoulder pain.\n\n\nCONCLUSIONS\nPitchers in this age group should be cautioned about throwing breaking pitches (curveballs and sliders) because of the increased risk of elbow and shoulder pain. Limitations on pitches thrown in a game and in a season can also reduce the risk of pain. Further evaluation of pain and pitching mechanics is necessary.",
"title": ""
},
{
"docid": "e4574b1e8241599b5c3ef740b461efba",
"text": "Increasing awareness of ICS security issues has brought about a growing body of work in this area, including pioneering contributions based on realistic control system logs and network traces. This paper surveys the state of the art in ICS security research, including efforts of industrial researchers, highlighting the most interesting works. Research efforts are grouped into divergent areas, where we add “secure control” as a new category to capture security goals specific to control systems that differ from security goals in traditional IT systems.",
"title": ""
},
{
"docid": "329420b8b13e8c315d341e382419315a",
"text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.",
"title": ""
},
{
"docid": "1e5073e73c371f1682d95bb3eedaf7f4",
"text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing ldquounderstandingrdquo robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.",
"title": ""
},
{
"docid": "40099678d2c97013eb986d3be93eefb4",
"text": "Mortality prediction of intensive care unit (ICU) patients facilitates hospital benchmarking and has the opportunity to provide caregivers with useful summaries of patient health at the bedside. The development of novel models for mortality prediction is a popular task in machine learning, with researchers typically seeking to maximize measures such as the area under the receiver operator characteristic curve (AUROC). The number of ’researcher degrees of freedom’ that contribute to the performance of a model, however, presents a challenge when seeking to compare reported performance of such models. In this study, we review publications that have reported performance of mortality prediction models based on the Medical Information Mart for Intensive Care (MIMIC) database and attempt to reproduce the cohorts used in their studies. We then compare the performance reported in the studies against gradient boosting and logistic regression models using a simple set of features extracted from MIMIC. We demonstrate the large heterogeneity in studies that purport to conduct the single task of ’mortality prediction’, highlighting the need for improvements in the way that prediction tasks are reported to enable fairer comparison between models. We reproduced datasets for 38 experiments corresponding to 28 published studies using MIMIC. In half of the experiments, the sample size we acquired was 25% greater or smaller than the sample size reported. The highest discrepancy was 11,767 patients. While accurate reproduction of each study cannot be guaranteed, we believe that these results highlight the need for more consistent reporting of model design and methodology to allow performance improvements to be compared. We discuss the challenges in reproducing the cohorts used in the studies, highlighting the importance of clearly reported methods (e.g. data cleansing, variable selection, cohort selection) and the need for open code and publicly available benchmarks.",
"title": ""
},
{
"docid": "f3c1ad1431d3aced0175dbd6e3455f39",
"text": "BACKGROUND\nMethylxanthine therapy is commonly used for apnea of prematurity but in the absence of adequate data on its efficacy and safety. It is uncertain whether methylxanthines have long-term effects on neurodevelopment and growth.\n\n\nMETHODS\nWe randomly assigned 2006 infants with birth weights of 500 to 1250 g to receive either caffeine or placebo until therapy for apnea of prematurity was no longer needed. The primary outcome was a composite of death, cerebral palsy, cognitive delay (defined as a Mental Development Index score of <85 on the Bayley Scales of Infant Development), deafness, or blindness at a corrected age of 18 to 21 months.\n\n\nRESULTS\nOf the 937 infants assigned to caffeine for whom adequate data on the primary outcome were available, 377 (40.2%) died or survived with a neurodevelopmental disability, as compared with 431 of the 932 infants (46.2%) assigned to placebo for whom adequate data on the primary outcome were available (odds ratio adjusted for center, 0.77; 95% confidence interval [CI], 0.64 to 0.93; P=0.008). Treatment with caffeine as compared with placebo reduced the incidence of cerebral palsy (4.4% vs. 7.3%; adjusted odds ratio, 0.58; 95% CI, 0.39 to 0.87; P=0.009) and of cognitive delay (33.8% vs. 38.3%; adjusted odds ratio, 0.81; 95% CI, 0.66 to 0.99; P=0.04). The rates of death, deafness, and blindness and the mean percentiles for height, weight, and head circumference at follow-up did not differ significantly between the two groups.\n\n\nCONCLUSIONS\nCaffeine therapy for apnea of prematurity improves the rate of survival without neurodevelopmental disability at 18 to 21 months in infants with very low birth weight. (ClinicalTrials.gov number, NCT00182312 [ClinicalTrials.gov].).",
"title": ""
},
{
"docid": "a0acd4870951412fa31bc7803f927413",
"text": "Surprisingly little is understood about the physiologic and pathologic processes that involve intraoral sebaceous glands. Neoplasms are rare. Hyperplasia of these glands is undoubtedly more common, but criteria for the diagnosis of intraoral sebaceous hyperplasia have not been established. These lesions are too often misdiagnosed as large \"Fordyce granules\" or, when very large, as sebaceous adenomas. On the basis of a series of 31 nonneoplastic sebaceous lesions and on published data, the following definition is proposed: intraoral sebaceous hyperplasia occurs when a lesion, judged clinically to be a distinct abnormality that requires biopsy for diagnosis or confirmation of clinical impression, has histologic features of one or more well-differentiated sebaceous glands that exhibit no fewer than 15 lobules per gland. Sebaceous glands with fewer than 15 lobules that form an apparently distinct clinical lesion on the buccal mucosa are considered normal, whereas similar lesions of other intraoral sites are considered ectopic sebaceous glands. Sebaceous adenomas are less differentiated than sebaceous hyperplasia.",
"title": ""
},
{
"docid": "23384db962a1eb524f40ca52f4852b14",
"text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. 
That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental",
"title": ""
},
{
"docid": "3688c987419daade77c44912fbc72ecf",
"text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.",
"title": ""
},
{
"docid": "cc93f5a421ad0e5510d027b01582e5ae",
"text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.",
"title": ""
},
{
"docid": "0c177af9c2fffa6c4c667d1b4a4d3d79",
"text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.",
"title": ""
},
{
"docid": "f996b9911692cc835e55e561c3a501db",
"text": "This study proposes a clustering-based Wi-Fi fingerprinting localization algorithm. The proposed algorithm first presents a novel support vector machine based clustering approach, namely SVM-C, which uses the margin between two canonical hyperplanes for classification instead of using the Euclidean distance between two centroids of reference locations. After creating the clusters of fingerprints by SVM-C, our positioning system embeds the classification mechanism into a positioning task and compensates for the large database searching problem. The proposed algorithm assigns the matched cluster surrounding the test sample and locates the user based on the corresponding cluster's fingerprints to reduce the computational complexity and remove estimation outliers. Experimental results from realistic Wi-Fi test-beds demonstrated that our approach apparently improves the positioning accuracy. As compared to three existing clustering-based methods, K-means, affinity propagation, and support vector clustering, the proposed algorithm reduces the mean localization errors by 25.34%, 25.21%, and 26.91%, respectively.",
"title": ""
},
{
"docid": "a2fe18fde80d729b9142ad116dbf5ba3",
"text": "We present a physically interpretable, continuous threedimensional (3D) model for handling occlusions with applications to road scene understanding. We probabilistically assign each point in space to an object with a theoretical modeling of the reflection and transmission probabilities for the corresponding camera ray. Our modeling is unified in handling occlusions across a variety of scenarios, such as associating structure from motion (SFM) point tracks with potentially occluding objects or modeling object detection scores in applications such as 3D localization. For point track association, our model uniformly handles static and dynamic objects, which is an advantage over motion segmentation approaches traditionally used in multibody SFM. Detailed experiments on the KITTI raw dataset show the superiority of the proposed method over both state-of-the-art motion segmentation and a baseline that heuristically uses detection bounding boxes for resolving occlusions. We also demonstrate how our continuous occlusion model may be applied to the task of 3D localization in road scenes.",
"title": ""
},
{
"docid": "20b00a2cc472dfec851f4aea42578a9e",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "f9d1fcca8fb8f83bdb2391d4fe0ba4ef",
"text": "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.",
"title": ""
}
] |
scidocsrr
|
725400ce7c5aebb6a73a49362a5ec61f
|
Credibility Assessment in the News: Do we need to read?
|
[
{
"docid": "a31ca7f2c2fce4a4f26d420f4aa91a91",
"text": "Transition-based dependency parsers usually use transition systems that monotonically extend partial parse states until they identify a complete parse tree. Honnibal et al. (2013) showed that greedy onebest parsing accuracy can be improved by adding additional non-monotonic transitions that permit the parser to “repair” earlier parsing mistakes by “over-writing” earlier parsing decisions. This increases the size of the set of complete parse trees that each partial parse state can derive, enabling such a parser to escape the “garden paths” that can trap monotonic greedy transition-based dependency parsers. We describe a new set of non-monotonic transitions that permits a partial parse state to derive a larger set of completed parse trees than previous work, which allows our parser to escape from a larger set of garden paths. A parser with our new nonmonotonic transition system has 91.85% directed attachment accuracy, an improvement of 0.6% over a comparable parser using the standard monotonic arc-eager transitions.",
"title": ""
},
{
"docid": "ee665e5a3d032a4e9b4e95cddac0f95c",
"text": "On p. 219, we describe the data we collected from BuzzSumo as “the total number of times each article was shared on Facebook” (emph. added). In fact, the BuzzSumo data are the number of engagements with each article, defined as the sum of shares, comments, and other interactions such as “likes.” All references to counts of Facebook shares in the paper and the online appendix based on the BuzzSumo data should be replaced with references to counts of Facebook engagements. None of the tables or figures in either the paper or the online appendix are affected by this change, nor does the change affect the results based on our custom survey. None of the substantive conclusions of the paper are affected with one exception discussed below, where our substantive conclusion is strengthened. Examples of cases where the text should be changed:",
"title": ""
}
] |
[
{
"docid": "7ecba9c479a754ad55664bf8208643e0",
"text": "One of the important problems that our society facing is people with disabilities which are finding hard to cope up with the fast growing technology. About nine billion people in the world are deaf and dumb. Communications between deaf-dumb and a normal person have always been a challenging task. Generally deaf and dumb people use sign language for communication, Sign language is an expressive and natural way for communication between normal and dumb people. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity, the artificial mouth is introduced for the dumb people. So, we need a translator to understand what they speak and communicate with us. Hence makes the communication between normal person and disabled people easier. This work aims to lower the barrier of disabled persons in communication. The main aim of this proposed work is to develop a cost effective system which can give voice to voiceless people with the help of Sign language. In the proposed work, the captured images are processed through MATLAB in PC and converted into speech through speaker and text in LCD by interfacing with Arduino. Keyword : Disabled people, Sign language, Image Processing, Arduino, LCD display, Speaker.",
"title": ""
},
{
"docid": "e87a52f3e4f3c08838a2eff7501a12e5",
"text": "A coordinated approach to digital forensic readiness (DFR) in a large organisation requires the management and monitoring of a wide variety of resources, both human and technical. The resources involved in DFR in large organisations typically include staff from multiple departments and business units, as well as network infrastructure and computing platforms. The state of DFR within large organisations may therefore be adversely affected if the myriad human and technical resources involved are not managed in an optimal manner. This paper contributes to DFR by proposing the novel concept of a digital forensic readiness management system (DFRMS). The purpose of a DFRMS is to assist large organisations in achieving an optimal level of management for DFR. In addition to this, we offer an architecture for a DFRMS. This architecture is based on requirements for DFR that we ascertained from an exhaustive review of the DFR literature. We describe the architecture in detail and show that it meets the requirements set out in the DFR literature. The merits and disadvantages of the architecture are also discussed. Finally, we describe and explain an early prototype of a DFRMS.",
"title": ""
},
{
"docid": "fc9ddeeae99a4289d5b955c9ba90c682",
"text": "In recent years there have been growing calls for forging greater connections between education and cognitive neuroscience.As a consequence great hopes for the application of empirical research on the human brain to educational problems have been raised. In this article we contend that the expectation that results from cognitive neuroscience research will have a direct and immediate impact on educational practice are shortsighted and unrealistic. Instead, we argue that an infrastructure needs to be created, principally through interdisciplinary training, funding and research programs that allow for bidirectional collaborations between cognitive neuroscientists, educators and educational researchers to grow.We outline several pathways for scaffolding such a basis for the emerging field of ‘Mind, Brain and Education’ to flourish as well as the obstacles that are likely to be encountered along the path.",
"title": ""
},
{
"docid": "77e30fedf56545ba22ae9f1ef17b4dc9",
"text": "Most of current self-checkout systems rely on barcodes, RFID tags, or QR codes attached on items to distinguish products. This paper proposes an Intelligent Self-Checkout System (ISCOS) embedded with a single camera to detect multiple products without any labels in real-time performance. In addition, deep learning skill is applied to implement product detection, and data mining techniques construct the image database employed as training dataset. Product information gathered from a number of markets in Taiwan is utilized to make recommendation to customers. The bounding boxes are annotated by background subtraction with a fixed camera to avoid time-consuming process for each image. The contribution of this work is to combine deep learning and data mining approaches to real-time multi-object detection in image-based checkout system.",
"title": ""
},
{
"docid": "3c907a3e7ff704348e78239b2b54b917",
"text": "Real-time traffic surveillance is essential in today’s intelligent transportation systems and will surely play a vital role in tomorrow’s smart cities. The work detailed in this paper reports on the development and implementation of a novel smart wireless sensor for traffic monitoring. Computationally efficient and reliable algorithms for vehicle detection, speed and length estimation, classification, and time-synchronization were fully developed, integrated, and evaluated. Comprehensive system evaluation and extensive data analysis were performed to tune and validate the system for a reliable and robust operation. Several field studies conducted on highway and urban roads for different scenarios and under various traffic conditions resulted in 99.98% detection accuracy, 97.11% speed estimation accuracy, and 97% length-based vehicle classification accuracy. The developed system is portable, reliable, and cost-effective. The system can also be used for short-term or long-term installment on surface of highway, roadway, and roadside. Implementation cost of a single node including enclosure is US $50.",
"title": ""
},
{
"docid": "9f348ac8bae993ddf225f47dfa20182b",
"text": "BACKGROUND\nTreatment of giant melanocytic nevi (GMN) remains a multidisciplinary challenge. We present analysis of diagnostics, treatment, and follow- up in children with GMN to establish obligatory procedures in these patients.\n\n\nMATERIAL/METHODS\nIn 24 children with GMN, we analyzed: localization, main nevus diameter, satellite nevi, brain MRI, catecholamines concentrations in 24-h urine collection, surgery stages number, and histological examinations. The t test was used to compare catecholamines concentrations in patient subgroups.\n\n\nRESULTS\nNine children had \"bathing trunk\" nevus, 7 had main nevus on the back, 6 on head/neck, and 2 on neck/shoulder and neck/thorax. Brain MRI revealed neurocutaneous melanosis (NCM) in 7/24 children (29.2%), symptomatic in 1. Among urine catecholamines levels from 20 patients (33 samples), dopamine concentration was elevated in 28/33, noradrenaline in 15, adrenaline in 11, and vanillylmandelic acid in 4. In 6 NCM children, all catecholamines concentrations were higher than in patients without NCM (statistically insignificant). In all patients, histological examination of excised nevi revealed compound nevus, with neurofibromatic component in 15 and melanoma in 2. They remain without recurrence/metastases at 8- and 3-year-follow-up. There were 4/7 NCM patients with more than 1 follow-up MRI; in 1 a new melanin deposit was found and in 3 there was no progression.\n\n\nCONCLUSIONS\nEarly excision with histological examination speeds the diagnosis of melanoma. Brain MRI is necessary to confirm/rule-out NCM. High urine dopamine concentration in GMN children, especially with NCM, is an unpublished finding that can indicate patients with more serious neurological disease. Treatment of GMN children should be tailored individually for each case with respect to all medical/psychological aspects.",
"title": ""
},
{
"docid": "e8bbbc1864090b0246735868faa0e11f",
"text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.",
"title": ""
},
{
"docid": "1ebb827b9baf3307bc20de78538d23e7",
"text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.003 ⇑ Corresponding author. Address: University of North Texas, College of Business, 1155 Union Circle #311160, Denton, TX 76203-5017, USA. E-mail addresses: mohammad.salehan@unt.edu (M. Salehan), arash.negah ban@unt.edu (A. Negahban). 1 These authors contributed equally to the work. Mohammad Salehan 1,⇑, Arash Negahban 1",
"title": ""
},
{
"docid": "d17f6ed783c0ec33e4c74171db82392b",
"text": "Caffeic acid phenethyl ester, derived from natural propolis, has been reported to have anti-cancer properties. Voltage-gated sodium channels are upregulated in many cancers where they promote metastatic cell behaviours, including invasiveness. We found that micromolar concentrations of caffeic acid phenethyl ester blocked voltage-gated sodium channel activity in several invasive cell lines from different cancers, including breast (MDA-MB-231 and MDA-MB-468), colon (SW620) and non-small cell lung cancer (H460). In the MDA-MB-231 cell line, which was adopted as a 'model', long-term (48 h) treatment with 18 μM caffeic acid phenethyl ester reduced the peak current density by 91% and shifted steady-state inactivation to more hyperpolarized potentials and slowed recovery from inactivation. The effects of long-term treatment were also dose-dependent, 1 μM caffeic acid phenethyl ester reducing current density by only 65%. The effects of caffeic acid phenethyl ester on metastatic cell behaviours were tested on the MDA-MB-231 cell line at a working concentration (1 μM) that did not affect proliferative activity. Lateral motility and Matrigel invasion were reduced by up to 14% and 51%, respectively. Co-treatment of caffeic acid phenethyl ester with tetrodotoxin suggested that the voltage-gated sodium channel inhibition played a significant intermediary role in these effects. We conclude, first, that caffeic acid phenethyl ester does possess anti-metastatic properties. Second, the voltage-gated sodium channels, commonly expressed in strongly metastatic cancers, are a novel target for caffeic acid phenethyl ester. Third, more generally, ion channel inhibition can be a significant mode of action of nutraceutical compounds.",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "be283056a8db3ab5b2481f3dc1f6526d",
"text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"title": ""
},
{
"docid": "460b8f82e5c378c7d866d92339e14572",
"text": "When the number of projections does not satisfy the Shannon/Nyquist sampling requirement, streaking artifacts are inevitable in x-ray computed tomography (CT) images reconstructed using filtered backprojection algorithms. In this letter, the spatial-temporal correlations in dynamic CT imaging have been exploited to sparsify dynamic CT image sequences and the newly proposed compressed sensing (CS) reconstruction method is applied to reconstruct the target image sequences. A prior image reconstructed from the union of interleaved dynamical data sets is utilized to constrain the CS image reconstruction for the individual time frames. This method is referred to as prior image constrained compressed sensing (PICCS). In vivo experimental animal studies were conducted to validate the PICCS algorithm, and the results indicate that PICCS enables accurate reconstruction of dynamic CT images using about 20 view angles, which corresponds to an under-sampling factor of 32. This undersampling factor implies a potential radiation dose reduction by a factor of 32 in myocardial CT perfusion imaging.",
"title": ""
},
{
"docid": "cbc6bd586889561cc38696f758ad97d2",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
},
{
"docid": "3b03af1736709e536a4a58363102bc60",
"text": "Music transcription, as an essential component in music signal processing, contributes to wide applications in musicology, accelerates the development of commercial music industry, facilitates the music education as well as benefits extensive music lovers. However, the work relies on a lot of manual work due to heavy requirements on knowledge and experience. This project mainly examines two deep learning methods, DNN and LSTM, to automatize music transcription. We transform the audio files into spectrograms using constant Q transform and extract features from the spectrograms. Deep learning methods have the advantage of learning complex features in music transcription. The promising results verify that deep learning methods are capable of learning specific musical properties, including notes and rhythms. Keywords—automatic music transcription; deep learning; deep neural network (DNN); long shortterm memory networks (LSTM)",
"title": ""
},
{
"docid": "3c4712f1c54f3d9d8d4297d9ab0b619f",
"text": "In this paper, we introduce Cellular Automata-a dynamic evolution model to intuitively detect the salient object. First, we construct a background-based map using color and space contrast with the clustered boundary seeds. Then, a novel propagation mechanism dependent on Cellular Automata is proposed to exploit the intrinsic relevance of similar regions through interactions with neighbors. Impact factor matrix and coherence matrix are constructed to balance the influential power towards each cell's next state. The saliency values of all cells will be renovated simultaneously according to the proposed updating rule. It's surprising to find out that parallel evolution can improve all the existing methods to a similar level regardless of their original results. Finally, we present an integration algorithm in the Bayesian framework to take advantage of multiple saliency maps. Extensive experiments on six public datasets demonstrate that the proposed algorithm outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "fdd998012aa9b76ba9fe4477796ddebb",
"text": "Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.",
"title": ""
},
{
"docid": "df69a701bca12d3163857a9932ef51e2",
"text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.",
"title": ""
},
{
"docid": "b513d1cbf3b2f649afcea4d0ab6784ac",
"text": "RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.",
"title": ""
},
{
"docid": "04d110e130c5d7dc56c2d8e63857e9aa",
"text": "OBJECTIVE\nThis study aimed to assess weight bias among professionals who specialize in treating eating disorders and identify to what extent their weight biases are associated with attitudes about treating obese patients.\n\n\nMETHOD\nParticipants were 329 professionals treating eating disorders, recruited through professional organizations that specialize in eating disorders. Participants completed anonymous, online self-report questionnaires, assessing their explicit weight bias, perceived causes of obesity, attitudes toward treating obese patients, perceptions of treatment compliance and success of obese patients, and perceptions of weight bias among other practitioners.\n\n\nRESULTS\nNegative weight stereotypes were present among some professionals treating eating disorders. Although professionals felt confident (289; 88%) and prepared (276; 84%) to provide treatment to obese patients, the majority (184; 56%) had observed other professionals in their field making negative comments about obese patients, 42% (138) believed that practitioners who treat eating disorders often have negative stereotypes about obese patients, 35% (115) indicated that practitioners feel uncomfortable caring for obese patients, and 29% (95) reported that their colleagues have negative attitudes toward obese patients. Compared to professionals with less weight bias, professionals with stronger weight bias were more likely to attribute obesity to behavioral causes, expressed more negative attitudes and frustrations about treating obese patients, and perceived poorer treatment outcomes for these patients.\n\n\nDISCUSSION\nSimilar to other health disciplines, professionals treating eating disorders are not immune to weight bias. This has important implications for provision of clinical treatment with obese individuals and efforts to reduce weight bias in the eating disorders field.",
"title": ""
}
] |
scidocsrr
|
90788d5ce593a102ea5586c4a2a894f2
|
Segmentation of volumetric MRA images by using capillary active contour
|
[
{
"docid": "f3c2663cb0341576d754bb6cd5f2c0f5",
"text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.",
"title": ""
},
{
"docid": "e7f1e8f82c91c7afd4d58c9987f3e95e",
"text": "ÐA level set method for capturing the interface between two ¯uids is combined with a variable density projection method to allow for computation of a two-phase ¯ow where the interface can merge/ break and the ¯ow can have a high Reynolds number. A distance function formulation of the level set method enables us to compute ¯ows with large density ratios (1000/1) and ¯ows that are surface tension driven, with no emotional involvement. Recent work has improved the accuracy of the distance function formulation and the accuracy of the advection scheme. We compute ¯ows involving air bubbles and water drops, among others. We validate our code against experiments and theory. In Ref. [1] an Eulerian scheme was described for computing incompressible two-¯uid ¯ow where the density ratio across the interface is large (e.g. air/water) and both surface tension and vis-cous eects are included. In this paper, we modify our scheme improving both the accuracy and eciency of the algorithm. We use a level set function tòcapture' the air/water interface thus allowing us to eciently compute ¯ows with complex interfacial structure. In Ref. [1], a new iterative process was devised in order to maintain the level set function as the signed distance from the air/water interface. Since we know the distance from the interface at any point in the domain, we can give the interface a thickness of size O(h); this allows us to compute with sti surface tension eects and steep density gradients. We have since imposed a new`constraint' on the iterative process improving the accuracy of this process. We have also upgraded our scheme to using higher order ENO for spatial derivatives, and high order Runge±Kutta for the time dis-cretization (see Ref. [2]). An example of the problems we wish to solve is illustrated in Fig. 1. An air bubble rises up to the water surface and then`bursts', emitting a jet of water that eventually breaks up into satellite drops. It is a very dicult problem involving much interfacial complexity and sti surface tension eects. The density ratio at the interface is ca 1000/1. In Ref. [3], the boundary integral method was used to compute thèbubble-burst' problem and compared with experimental results. The boundary integral method is a very good method for inviscid air/water problems because, as a Lagrangian based scheme, only points on the interface need to be discretized. Unfortunately, if one wants to include the merging and breaking …",
"title": ""
},
{
"docid": "d3a18f5ad29f2eddd7eb32c561389212",
"text": "Interpretation of magnetic resonance angiography (MRA) is problematic due to complexities of vascular shape and to artifacts such as the partial volume effect. The authors present new methods to assist in the interpretation of MRA. These include methods for detection of vessel paths and for determination of branching patterns of vascular trees. They are based on the ordered region growing (ORG) algorithm that represents the image as an acyclic graph, which can be reduced to a skeleton by specifying vessel endpoints or by a pruning process. Ambiguities in the vessel branching due to vessel overlap are effectively resolved by heuristic methods that incorporate a priori knowledge of bifurcation spacing. Vessel paths are detected at interactive speeds on a 500-MHz processor using vessel endpoints. These methods apply best to smaller vessels where the image intensity peaks at the center of the lumen which, for the abdominal MRA, includes vessels whose diameter is less than 1 cm.",
"title": ""
}
] |
[
{
"docid": "24151cf5d4481ba03e6ffd1ca29f3441",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "e4d35033649087965951736fe7565d6d",
"text": "Much recent work has explored the challenge of nonvisual text entry on mobile devices. While researchers have attempted to solve this problem with gestures, we explore a different modality: speech. We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mobile devices. We found that blind people used speech more often and input longer messages than sighted people. We then conducted a study with 8 blind people to observe how they used speech input on an iPod compared with the on-screen keyboard with VoiceOver. We found that speech was nearly 5 times as fast as the keyboard. While participants were mostly satisfied with speech input, editing recognition errors was frustrating. Participants spent an average of 80.3% of their time editing. Finally, we propose challenges for future work, including more efficient eyes-free editing and better error detection methods for reviewing text.",
"title": ""
},
{
"docid": "92583a036066d87f857ae1be2a9ed109",
"text": "The OpenCog software development framework, for advancement of the development and testing of powerful and responsible integrative AGI, is described. The OpenCog Framework (OCF) 1.0, to be released in 2008 under the GPLv2, is comprised of a collection of portable libraries for OpenCog applications, plus an initial collection of cognitive algorithms that operate within the OpenCog framework. The OCF libraries include a flexible knowledge representation embodied in a scalable knowledge store, a cognitive process scheduler, and a plug-in architecture for allowing interaction between cognitive, perceptual, and control algorithms.",
"title": ""
},
{
"docid": "d86517401c90186abb31895028d6f18b",
"text": "The widespread of ultrasound as a guide to regional anesthesia has allowed the development of numerous alternatives to paravertebral block in breast surgery named fascial or myofascial blocks [1,2]. We chose to use a bilateral ultrasound-guided erector-spinae plane (ESP)blocks in a patient scheduled for breast cancer surgery that rejected epidural analgesia. We present this case report once obtained written informed consent from the patient. A 59-year-oldwoman, height 156 cmandweight 54 kg, ASA2 smoker with history of chronic hypertension and chronic obstructive pulmonary disease was scheduled for right subcutaneous mastectomy with nipple-areola skin sparing due to a breast cancer. A sentinel lymph-",
"title": ""
},
{
"docid": "5192d78f1ea78f0bcaae0433357c25d7",
"text": "The ISO 26262 standard defines functional safety for automotive E/E systems. Since the publication of the first edition of this standard in 2011, many different safety techniques complying to the ISO 26262 have been developed. However, it is not clear which parts and (sub-) phases of the standard are targeted by these techniques and which objectives of the standard are particularly addressed. Therefore, we carried out a gap analysis to identify gaps between the safety standard objectives of the part 3 till 7 and the existing techniques. In this paper the results of the gap analysis are presented such as we identified that there is a lack of mature tool support for the ASIL sub-phase and a need for a common platform for the entire product development cycle.",
"title": ""
},
{
"docid": "3daa9fc7d434f8a7da84dd92f0665564",
"text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).",
"title": ""
},
{
"docid": "ae4974a3d7efedab7cd6651101987e79",
"text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.",
"title": ""
},
{
"docid": "72ca634d0236b25a943e60331b43f055",
"text": "3D models derived from point clouds are useful in various shapes to optimize the trade-off between precision and geometric complexity. They are defined at different granularity levels according to each indoor situation. In this article, we present an integrated 3D semantic reconstruction framework that leverages segmented point cloud data and domain ontologies. Our approach follows a part-to-whole conception which models a point cloud in parametric elements usable per instance and aggregated to obtain a global 3D model. We first extract analytic features, object relationships and contextual information to permit better object characterization. Then, we propose a multi-representation modelling mechanism augmented by automatic recognition and fitting from the 3D library ModelNet10 to provide the best candidates for several 3D scans of furniture. Finally, we combine every element to obtain a consistent indoor hybrid 3D model. The method allows a wide range of applications from interior navigation to virtual stores.",
"title": ""
},
{
"docid": "4d4540a59e637f9582a28ed62083bfd6",
"text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.",
"title": ""
},
{
"docid": "13f24b04e37c9e965d85d92e2c588c9a",
"text": "In this paper we propose a new user purchase preference model based on their implicit feedback behavior. We analyze user behavior data to seek their purchase preference signals. We find that if a user has more purchase preference on a certain item he would tend to browse it for more times. It gives us an important inspiration that, not only purchasing behavior but also other types of implicit feedback like browsing behavior, can indicate user purchase preference. We further find that user purchase preference signals also exist in the browsing behavior of item categories. Therefore, when we want to predict user purchase preference for certain items, we can integrate these behavior types into our user preference model by converting such preference signals into numerical values. We evaluate our model on a real-world dataset from a shopping site in China. Results further validate that user purchase preference model in our paper can capture more and accurate user purchase preference information from implicit feedback and greatly improves the performance of user purchase prediction.",
"title": ""
},
{
"docid": "5484ad5af4d1133683e95bc0178564f0",
"text": "Two studies investigated the connection between narcissism and sensitivity to criticism. In study 1, participants completed the Narcissistic Personality Inventory (NPI) and the Sensitivity to Criticism Scale (SCS) and were asked to construct and deliver speeches to be rated by performance judges. They were then asked whether they would like to receive evaluative feedback. Narcissism and sensitivity to criticism were mildly, but not significantly, negatively correlated and had contrasting relationships with choices regarding feedback. Highly narcissistic participants tended to seek (rather than avoid) feedback, whereas highly sensitive participants tended to reject feedback opportunities. Study 2 examined the relationship between sensitivity to criticism and both overt and covert narcissism. Those scoring high on the trait narcissism, as measured by the NPI, tended to be less sensitive to criticism, sought (rather than avoided) feedback opportunities, experienced little internalized negative emotions in response to “extreme” feedback conditions, and did not expect to ruminate over their performance. By contrast, participants scoring high on a measure of “covert narcissism” were high in sensitivity to criticism, tended to avoid feedback opportunities, experienced high levels of internalized negative emotions, and showed high levels of expected rumination. These findings suggest that the relationship between narcissism and sensitivity to criticism is highly dependent upon the definition or “form” of narcissism considered.",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "be8e1e4fd9b8ddb0fc7e1364455999e8",
"text": "In this paper, we describe the development and exploitation of a corpus-based tool for the identification of metaphorical patterns in large datasets. The analysis of metaphor as a cognitive and cultural, rather than solely linguistic, phenomenon has become central as metaphor researchers working within ‘Cognitive Metaphor Theory’ have drawn attention to the presence of systematic and pervasive conventional metaphorical patterns in ‘ordinary’ language (e.g. I’m at a crossroads in my life). Cognitive Metaphor Theory suggests that these linguistic patterns reflect the existence of conventional conceptual metaphors, namely systematic cross-domain correspondences in conceptual structure (e.g. LIFE IS A JOURNEY). This theoretical approach, described further in section 2, has led to considerable advances in our understanding of metaphor both as a linguistic device and a cognitive model, and to our awareness of its role in many different genres and discourses. Although some recent research has incorporated corpus linguistic techniques into this framework for the analysis of metaphor, to date, such analyses have primarily involved the concordancing of pre-selected search strings (e.g. Deignan 2005). The method described in this paper represents an attempt to extend the limits of this form of analysis. In our approach, we have applied an existing semantic field annotation tool (USAS) developed at Lancaster University to aid metaphor researchers in searching corpora. We are able to filter all possible candidate semantic fields proposed by USAS to assist in finding possible ‘source’ (e.g. JOURNEY) and ‘target’ (e.g. LIFE) domains, and we can then go on to consider the potential metaphoricity of the expressions included under each possible source domain. This method thus enables us to identify open-ended sets of metaphorical expressions, which are not limited to predetermined search strings. In section 3, we present this emerging methodology for the computer-assisted analysis of metaphorical patterns in discourse. The semantic fields automatically annotated by USAS can be seen as roughly corresponding to the domains of metaphor theory. We have used USAS in combination with key word and domain techniques in Wmatrix (Rayson, 2003) to replicate earlier manual analyses, e.g. machine metaphors in Ken Kesey’s One Flew Over the Cuckoo’s Nest (Semino and Swindlehurst, 1996) and war, machine and organism metaphors in business magazines (Koller, 2004a). These studies are described in section 4.",
"title": ""
},
{
"docid": "7d014f64578943f8ec8e5e27d313e148",
"text": "In this paper, we extend the Divergent Component of Motion (DCM, also called `Capture Point') to 3D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external (e.g. leg) forces and the total force (i.e. external forces plus gravity) acting on the robot. Based on eCMP, VRP and DCM, we present a method for real-time planning and control of DCM trajectories in 3D. We address the problem of underactuation and propose methods to guarantee feasibility of the finally commanded forces. The capabilities of the proposed control framework are verified in simulations.",
"title": ""
},
{
"docid": "6fc870c703611e07519ce5fe956c15d1",
"text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"title": ""
},
{
"docid": "a968a9842bb49f160503b24bff57cdd6",
"text": "This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing with a pattern recognition perspective the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QCD is further extended with nonlinearities yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).",
"title": ""
},
{
"docid": "dd3d8d5d623a4bed6fb0939e15caa056",
"text": "This paper investigates a number of computational intelligence techniques in the detection of heart disease. Particularly, comparison of six well known classifiers for the well used Cleveland data is performed. Further, this paper highlights the potential of an expert judgment based (i.e., medical knowledge driven) feature selection process (termed as MFS), and compare against the generally employed computational intelligence based feature selection mechanism. Also, this article recognizes that the publicly available Cleveland data becomes imbalanced when considering binary classification. Performance of classifiers, and also the potential of MFS are investigated considering this imbalanced data issue. The experimental results demonstrate that the use of MFS noticeably improved the performance, especially in terms of accuracy, for most of the classifiers considered and for majority of the datasets (generated by converting the Cleveland dataset for binary classification). MFS combined with the computerized feature selection process (CFS) has also been investigated and showed encouraging results particularly for NaiveBayes, IBK and SMO. In summary, the medical knowledge based feature selection method has shown promise for use in heart disease diagnostics. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "615a24719fe4300ea8971e86014ed8fe",
"text": "This paper presents a new code for the analysis of gamma spectra generated by an equipment for continuous measurement of gamma radioactivity in aerosols with paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and elaborates reports. Therefore it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.",
"title": ""
},
{
"docid": "f391c56dd581d965548062944200e95f",
"text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.",
"title": ""
},
{
"docid": "3cab403ffab3e44252174ab5d7d985f8",
"text": "A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.",
"title": ""
}
] |
scidocsrr
|
6328f332b11863c1a18b27f9b2021915
|
The BSD Packet Filter: A New Architecture for User-level Packet Capture
|
[
{
"docid": "ea33b26333eaa1d92f3c42688eb8aba5",
"text": "Code to implement network protocols can be either inside the kernel of an operating system or in user-level processes. Kernel-resident code is hard to develop, debug, and maintain, but user-level implementations typically incur significant overhead and perform poorly.\nThe performance of user-level network code depends on the mechanism used to demultiplex received packets. Demultiplexing in a user-level process increases the rate of context switches and system calls, resulting in poor performance. Demultiplexing in the kernel eliminates unnecessary overhead.\nThis paper describes the packet filter, a kernel-resident, protocol-independent packet demultiplexer. Individual user processes have great flexibility in selecting which packets they will receive. Protocol implementations using the packet filter perform quite well, and have been in production use for several years.",
"title": ""
}
] |
[
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "083d5b88cc1bf5490a0783a4a94e9fb2",
"text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.",
"title": ""
},
{
"docid": "df62526aa79eb750790bd48254171faf",
"text": "SUMMARY Non-safety critical software developers have been reaping the benefits of adopting agile practices for a number of years. However, developers of safety critical software often have concerns about adopting agile practices. Through performing a literature review, this research has identified the perceived barriers to following agile practices when developing medical device software. A questionnaire based survey was also conducted with medical device software developers in Ireland to determine the barriers to adopting agile practices. The survey revealed that half of the respondents develop software in accordance with a plan driven software development lifecycle and that they believe that there are a number of perceived barriers to adopting agile practices when developing regulatory compliant software such as: being contradictory to regulatory requirements; insufficient coverage of risk management activities and the lack of up-front planning. In addition, a comparison is performed between the perceived and actual barriers. Based upon the findings of the literature review and survey, it emerged that no external barriers exist to adopting agile practices when developing medical device software and the barriers that do exists are internal barriers such as getting stakeholder buy in.",
"title": ""
},
{
"docid": "0c8517bab8a8fa34f25a72cf6c971b25",
"text": "Automotive radar sensors are key components for driver assistant systems. In order to handle complex traffic scenarios an advanced separability is required with respect to object angle, distance and velocity. In this contribution a highly integrated automotive radar sensor enabling chirp sequence modulation will be presented and discussed. Furthermore, the development of a target simulator which is essential for the characterization of such radar sensors will be introduced including measurements demonstrating the performance of our system.",
"title": ""
},
{
"docid": "70509b891a45c8cdd0f2ed02207af06f",
"text": "This paper presents an algorithm for drawing a sequence of graphs online. The algorithm strives to maintain the global structure of the graph and, thus, the user's mental map while allowing arbitrary modifications between consecutive layouts. The algorithm works online and uses various execution culling methods in order to reduce the layout time and handle large dynamic graphs. Techniques for representing graphs on the GPU allow a speedup by a factor of up to 17 compared to the CPU implementation. The scalability of the algorithm across GPU generations is demonstrated. Applications of the algorithm to the visualization of discussion threads in Internet sites and to the visualization of social networks are provided.",
"title": ""
},
{
"docid": "a94ad02ca81d7c4a25eaf9d37c8c3ef0",
"text": "The use of mobile technologies has recently received great attention in language learning. Most research evaluates the effects of employing mobile devices in language learning and explores the design of mobile-learning interventions that can maximize the benefits of new technologies. However, it is still unclear whether the use of mobile devices in language learning is more effective than other instructional approaches. It is also not clear whether the effects of mobile-device use vary in different settings. Our meta-analysis will explore these questions about mobile technology use in language learning. Based on the specific inclusion and exclusion criteria, 22 d-type effect sizes from 20 studies were calculated for the meta-analysis. We adopted the random-effects model, and the estimated average effect was 0.51 (se = 0.10). This is a moderate positive overall effect of using mobile devices on language acquisition and language-learning achievement. Moderator analyses under the mixed-effects model examined six features; effects varied significantly only by test type and source of the study. The overall effect and the effects of these moderators of mobile-device use on achievement in language learning are discussed.",
"title": ""
},
{
"docid": "ca41837dd01a66259854c03b820a46ff",
"text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.",
"title": ""
},
{
"docid": "333bd26d16544377536a6c96168439b7",
"text": "Mate retention is an important problem in romantic relationships because of mate poachers, infidelity, and the risk of outright defection. The current study (N=892) represents the first study of mate retention tactics conducted in Spain. We tested hypotheses about the effects of gender, relationship commitment status, and personality on mate retention tactics. Women and men differed in the use of resource display, appearance enhancement, intrasexual violence, and submission/self-abasement as mate retention tactics. Those in more committed relationships reported higher levels of resource display, appearance enhancement, love, and verbal signals of possession. Those in less committed relationships more often reported intentionally evoking jealousy in their partner as a mate retention tactic. Personality characteristics, particularly Neuroticism and Agreeableness, correlated in coherent ways with mate retention tactics, supporting two evolution-based hypotheses. Discussion focuses on the implications, future research directions, and interdisciplinary syntheses emerging between personality and social psychology and evolutionary psychology.",
"title": ""
},
{
"docid": "59d194764511b1ad2ce0ca5d858fab21",
"text": "Humanoid robot path finding is one of the core-technologies in robot research domain. This paper presents an approach to finding a path for robot motion by fusing images taken by the NAO's camera and proximity information delivered by sonar sensors. The NAO robot takes an image around its surroundings, uses the fuzzy color extractor to segment its potential path colors, and selects a fitting line as path by the least squares method. Therefore, the NAO robot is able to perform the automatic navigation according to the selected path. As a result, the experiments are conducted to navigate the NAO robot to walk to a given destination and to grasp a box. In addition, the NAO robot uses its sonar sensors to detect a barrier and helps pick up the box with its hands.",
"title": ""
},
{
"docid": "e5c6debcbbb979a18ca13f7739043174",
"text": "Recurrent neural networks and sequence to sequence models require a predetermined length for prediction output length. Our model addresses this by allowing the network to predict a variable length output in inference. A new loss function with a tailored gradient computation is developed that trades off prediction accuracy and output length. The model utilizes a function to determine whether a particular output at a time should be evaluated or not given a predetermined threshold. We evaluate the model on the problem of predicting the prices of securities. We find that the model makes longer predictions for more stable securities and it naturally balances prediction accuracy and length.",
"title": ""
},
{
"docid": "ba35d998ee00110e8d571730811972f9",
"text": "Argument mining of online interactions is in its infancy. One reason is the lack of annotated corpora in this genre. To make progress, we need to develop a principled and scalable way of determining which portions of texts are argumentative and what is the nature of argumentation. We propose a two-tiered approach to achieve this goal and report on several initial studies to assess its potential.",
"title": ""
},
{
"docid": "64fd862582693e030c88418a1dcf4c54",
"text": "Anthropomorphic persuasive appeals are prevalent. However, their effectiveness has not been well studied. The present research addresses this issue with two experiments in the context of environmental persuasion. It shows that anthropomorphic messages, relative to non-anthropomorphic ones, appear to motivate more conservation behaviour and elicit more favourable message responses only among recipients who have a strong need for effectance or social connection. Among recipients whose such need is weak, anthropomorphic appeals seem to backfire. These findings extend the research on motivation and persuasion and add evidence to the motivational bases of anthropomorphism. In addition, joining some recent studies, the present research highlights the implications of anthropomorphism of nature for environmental conservation efforts, and offers some practical suggestions for environmental persuasion.",
"title": ""
},
{
"docid": "55dc046b0052658521d627f29bcd7870",
"text": "The proliferation of IT and its consequent dispersion is an enterprise reality, however, most organizations do not have adequate tools and/or methodologies that enable the management and coordination of their Information Systems. The Zachman Framework provides a structured way for any organization to acquire the necessary knowledge about itself with respect to the Enterprise Architecture. Zachman proposes a logical structure for classifying and organizing the descriptive representations of an enterprise, in different dimensions, and each dimension can be perceived in different perspectives.In this paper, we propose a method for achieving an Enterprise Architecture Framework, based on the Zachman Framework Business and IS perspectives, that defines the several artifacts for each cell, and a method which defines the sequence of filling up each cell in a top-down and incremental approach. We also present a tool developed for the purpose of supporting the Zachman Framework concepts. The tool: (i) behaves as an information repository for the framework's concepts; (ii) produces the proposed artifacts that represent each cell contents, (iii) allows multi-dimensional analysis among cell's elements, which is concerned with perspectives (rows) and/or dimensions (columns) dependency; and (iv) finally, evaluate the integrity, dependency and, business and information systems alignment level, through the answers defined for each framework dimension.",
"title": ""
},
{
"docid": "e813eadbd5c8942f5ab01fdeda85c023",
"text": "Imagination is considered an important component of the creative process, and many psychologists agree that imagination is based on our perceptions, experiences, and conceptual knowledge, recombining them into novel ideas and impressions never before experienced. As an attempt to model this account of imagination, we introduce the Associative Conceptual Imagination (ACI) framework that uses associative memory models in conjunction with vector space models. ACI is a framework for learning conceptual knowledge and then learning associations between those concepts and artifacts, which facilitates imagining and then creating new and interesting artifacts. We discuss the implications of this framework, its creative potential, and possible ways to implement it in practice. We then demonstrate an initial prototype that can imagine and then generate simple images.",
"title": ""
},
{
"docid": "8858053a805375aba9d8e71acfd7b826",
"text": "With the accelerating rate of globalization, business exchanges are carried out cross the border, as a result there is a growing demand for talents professional both in English and Business. We can see that at present Business English courses are offered by many language schools in the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. However, this paper concludes that Business English is different from General English at least in such aspects as in the role of teacher, in course design, in teaching models, etc., thus different teaching methods should be applied in order to realize expected teaching goals.",
"title": ""
},
{
"docid": "40dc2dc28dca47137b973757cdf3bf34",
"text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.",
"title": ""
},
{
"docid": "1e0eade3cc92eb79160aeac35a3a26d1",
"text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011",
"title": ""
},
{
"docid": "6990c4f7bde94cb0e14245872e670f91",
"text": "The UK's recent move to polymer banknotes has seen some of the currently used fingermark enhancement techniques for currency potentially become redundant, due to the surface characteristics of the polymer substrates. Possessing a non-porous surface with some semi-porous properties, alternate processes are required for polymer banknotes. This preliminary investigation explored the recovery of fingermarks from polymer notes via vacuum metal deposition using elemental copper. The study successfully demonstrated that fresh latent fingermarks, from an individual donor, could be clearly developed and imaged in the near infrared. By varying the deposition thickness of the copper, the contrast between the fingermark minutiae and the substrate could be readily optimised. Where the deposition thickness was thin enough to be visually indistinguishable, forensic gelatin lifters could be used to lift the fingermarks. These lifts could then be treated with rubeanic acid to produce a visually distinguishable mark. The technique has shown enough promise that it could be effectively utilised on other semi- and non-porous substrates.",
"title": ""
},
{
"docid": "6018c72660f9fd8f3d073febb4b54043",
"text": "Watershed Transformation in mathematical morphology is a powerful tool for image segmentation. Watershed transformation based segmentation is generally marker controlled segmentation. This paper purposes a novel method of image segmentation that includes image enhancement and noise removal techniques with the Prewitt’s edge detection operator. The proposed method is evaluated and compared to existing method. The results show that the proposed method could effectively reduce the over segmentation effect and achieve more accurate segmentation results than the existing method.",
"title": ""
},
{
"docid": "b3abdcc994bdccde066f35dc863dc542",
"text": "This paper outlines the development of a wearable game controller incorporating vibrotacticle haptic feedback that provides a low cost, versatile and intuitive interface for controlling digital games. The device differs from many traditional haptic feedback implementation in that it combines vibrotactile based haptic feedback with gesture based input, thus becoming a two way conduit between the user and the virtual environment. The device is intended to challenge what is considered an “interface” and draws on work in the area of Actor-Network theory to purposefully blur the boundary between man and machine. This allows for a more immersive experience, so rather than making the user feel like they are controlling an aircraft the intuitive interface allows the user to become the aircraft that is controlled by the movements of the user's hand. This device invites playful action and thrill. It bridges new territory on portable and low cost solutions for haptic controllers in a gaming context.",
"title": ""
}
] |
scidocsrr
|
c3c305f1b0114c46ec4ca620701ce52b
|
Organizational change and development.
|
[
{
"docid": "4a536c1186a1d1d1717ec1e0186b262c",
"text": "In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives. A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, selforganizing, and learning.",
"title": ""
}
] |
[
{
"docid": "51c82ab631167a61e553e1ab8e34a385",
"text": "The social and political context of sexual identity development in the United States has changed dramatically since the mid twentieth century. Same-sex attracted individuals have long needed to reconcile their desire with policies of exclusion, ranging from explicit outlaws on same-sex activity to exclusion from major social institutions such as marriage. This paper focuses on the implications of political exclusion for the life course of individuals with same-sex desire through the analytic lens of narrative. Using illustrative evidence from a study of autobiographies of gay men spanning a 60-year period and a study of the life stories of contemporary same-sex attracted youth, we detail the implications of historic silence, exclusion, and subordination for the life course.",
"title": ""
},
{
"docid": "a5bd062a1ed914fb2effc924e41a4f73",
"text": "With the developments and applications of the new information technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, a smart manufacturing era is coming. At the same time, various national manufacturing development strategies have been put forward, such as Industry 4.0, Industrial Internet, manufacturing based on Cyber-Physical System, and Made in China 2025. However, one of specific challenges to achieve smart manufacturing with these strategies is how to converge the manufacturing physical world and the virtual world, so as to realize a series of smart operations in the manufacturing process, including smart interconnection, smart interaction, smart control and management, etc. In this context, as a basic unit of manufacturing, shop-floor is required to reach the interaction and convergence between physical and virtual spaces, which is not only the imperative demand of smart manufacturing, but also the evolving trend of itself. Accordingly, a novel concept of digital twin shop-floor (DTS) based on digital twin is explored and its four key components are discussed, including physical shop-floor, virtual shop-floor, shop-floor service system, and shop-floor digital twin data. What is more, the operation mechanisms and implementing methods for DTS are studied and key technologies as well as challenges ahead are investigated, respectively.",
"title": ""
},
{
"docid": "cc6161fd350ac32537dc704cbfef2155",
"text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.",
"title": ""
},
{
"docid": "c0559cebfad123a67777868990d40c7e",
"text": "One of the attractive methods for providing natural human-computer interaction is the use of the hand as an input device rather than the cumbersome devices such as keyboards and mice, which need the user to be located in a specific location to use these devices. Since human hand is an articulated object, it is an open issue to discuss. The most important thing in hand gesture recognition system is the input features, and the selection of good features representation. This paper presents a review study on the hand postures and gesture recognition methods, which is considered to be a challenging problem in the human-computer interaction context and promising as well. Many applications and techniques were discussed here with the explanation of system recognition framework and its main phases.",
"title": ""
},
{
"docid": "e3db1429e8821649f35270609459cb0d",
"text": "Novelty detection is the task of recognising events the differ from a model of normality. This paper proposes an acoustic novelty detector based on neural networks trained with an adversarial training strategy. The proposed approach is composed of a feature extraction stage that calculates Log-Mel spectral features from the input signal. Then, an autoencoder network, trained on a corpus of “normal” acoustic signals, is employed to detect whether a segment contains an abnormal event or not. A novelty is detected if the Euclidean distance between the input and the output of the autoencoder exceeds a certain threshold. The innovative contribution of the proposed approach resides in the training procedure of the autoencoder network: instead of using the conventional training procedure that minimises only the Minimum Mean Squared Error loss function, here we adopt an adversarial strategy, where a discriminator network is trained to distinguish between the output of the autoencoder and data sampled from the training corpus. The autoencoder, then, is trained also by using the binary cross-entropy loss calculated at the output of the discriminator network. The performance of the algorithm has been assessed on a corpus derived from the PASCAL CHiME dataset. The results showed that the proposed approach provides a relative performance improvement equal to 0.26% compared to the standard autoencoder. The significance of the improvement has been evaluated with a one-tailed z-test and resulted significant with p < 0.001. The presented approach thus showed promising results on this task and it could be extended as a general training strategy for autoencoders if confirmed by additional experiments.",
"title": ""
},
{
"docid": "6b0e2a151fd9aa53a97884d3f6b34c33",
"text": "Building systems that possess the sensitivity and intelligence to identify and describe high-level attributes in music audio signals continues to be an elusive goal but one that surely has broad and deep implications for a wide variety of applications. Hundreds of articles have so far been published toward this goal, and great progress appears to have been made. Some systems produce remarkable accuracies at recognizing high-level semantic concepts, such as music style, genre, and mood. However, it might be that these numbers do not mean what they seem. In this article, we take a state-of-the-art music content analysis system and investigate what causes it to achieve exceptionally high performance in a benchmark music audio dataset. We dissect the system to understand its operation, determine its sensitivities and limitations, and predict the kinds of knowledge it could and could not possess about music. We perform a series of experiments to illuminate what the system has actually learned to do and to what extent it is performing the intended music listening task. Our results demonstrate how the initial manifestation of music intelligence in this state of the art can be deceptive. Our work provides constructive directions toward developing music content analysis systems that can address the music information and creation needs of real-world users.",
"title": ""
},
{
"docid": "69049d1f5a3b14bb00d57d16a93ec47f",
"text": "The porphyrias are disorders of haem biosynthesis which present with acute neurovisceral attacks or disorders of sun-exposed skin. Acute attacks occur mainly in adults and comprise severe abdominal pain, nausea, vomiting, autonomic disturbance, central nervous system involvement and peripheral motor neuropathy. Cutaneous porphyrias can be acute or chronic presenting at various ages. Timely diagnosis depends on clinical suspicion leading to referral of appropriate samples for screening by reliable biochemical methods. All samples should be protected from light. Investigation for an acute attack: • Porphobilinogen (PBG) quantitation in a random urine sample collected during symptoms. Urine concentration must be assessed by measuring creatinine, and a repeat requested if urine creatinine <2 mmol/L. • Urgent porphobilinogen testing should be available within 24 h of sample receipt at the local laboratory. Urine porphyrin excretion (TUP) should subsequently be measured on this urine. • Urine porphobilinogen should be measured using a validated quantitative ion-exchange resin-based method or LC-MS. • Increased urine porphobilinogen excretion requires confirmatory testing and clinical advice from the National Acute Porphyria Service. • Identification of individual acute porphyrias requires analysis of urine, plasma and faecal porphyrins. Investigation for cutaneous porphyria: • An EDTA blood sample for plasma porphyrin fluorescence emission spectroscopy and random urine sample for TUP. • Whole blood for porphyrin analysis is essential to identify protoporphyria. • Faeces need only be collected, if first-line tests are positive or if clinical symptoms persist. Investigation for latent porphyria or family history: • Contact a specialist porphyria laboratory for advice. Clinical, family details are usually required.",
"title": ""
},
{
"docid": "296ce1f0dd7bf02c8236fa858bb1957c",
"text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.",
"title": ""
},
{
"docid": "617d1d0900ddebb431ae8fe37ad2e23b",
"text": "We used cDNA microarrays to assess gene expression profiles in 60 human cancer cell lines used in a drug discovery screen by the National Cancer Institute. Using these data, we linked bioinformatics and chemoinformatics by correlating gene expression and drug activity patterns in the NCI60 lines. Clustering the cell lines on the basis of gene expression yielded relationships very different from those obtained by clustering the cell lines on the basis of their response to drugs. Gene-drug relationships for the clinical agents 5-fluorouracil and L-asparaginase exemplify how variations in the transcript levels of particular genes relate to mechanisms of drug sensitivity and resistance. This is the first study to integrate large databases on gene expression and molecular pharmacology.",
"title": ""
},
{
"docid": "40c4175be1573d9542f6f9f859fafb01",
"text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.",
"title": ""
},
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "b1ba519ffe5321d9ab92ebed8d9264bb",
"text": "OBJECTIVES\nThe purpose of this study was to establish reference charts of fetal biometric parameters measured by 2-dimensional sonography in a large Brazilian population.\n\n\nMETHODS\nA cross-sectional retrospective study was conducted including 31,476 low-risk singleton pregnancies between 18 and 38 weeks' gestation. The following fetal parameters were measured: biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight. To assess the correlation between the fetal biometric parameters and gestational age, polynomial regression models were created, with adjustments made by the determination coefficient (R(2)).\n\n\nRESULTS\nThe means ± SDs of the biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight measurements at 18 and 38 weeks were 4.2 ± 2.34 and 9.1 ± 4.0 cm, 15.3 ± 7.56 and 32.3 ± 11.75 cm, 13.3 ± 10.42 and 33.4 ± 20.06 cm, 2.8 ± 2.17 and 7.2 ± 3.58 cm, and 256.34 ± 34.03 and 3169.55 ± 416.93 g, respectively. Strong correlations were observed between all fetal biometric parameters and gestational age, best represented by second-degree equations, with R(2) values of 0.95, 0.96, 0.95, 0.95, and 0.95 for biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight.\n\n\nCONCLUSIONS\nFetal biometric parameters were determined for a large Brazilian population, and they may serve as reference values in cases with a high risk of intrauterine growth disorders.",
"title": ""
},
{
"docid": "b1eff907bd8b227275f094d57b627ac8",
"text": "BACKGROUND\nPilonidal sinus is a chronic inflammatory disorder of the intergluteal sulcus. The disorder often negatively affects patients' quality of life, and there are numerous possible methods of operative treatment for pilonidal sinus. The aim of our study was to compare the results of 3 different operative procedures (tension-free primary closure, Limberg flap, and Karydakis technique) used in the treatment of pilonidal disease.\n\n\nMETHODS\nThe study was conducted via a prospective randomized design. The patients were randomized into 3 groups via a closed envelope method. Patients were included in the study after admission to our clinic with pilonidal sinus disease and operative treatment already were planned. The 2 main outcomes of the study were early complications from the methods used and later recurrences of the disease.\n\n\nRESULTS\nA total of 150 patients were included in the study, and the groups were similar in terms of age, sex, and American Society of Anesthesiologists scores. The median follow-up time of the study was 24.2 months (range, 18.5-34.27) postsurgery. The recurrence rates were 6% for both the Limberg and Karydakis groups and 4% for the tension-free primary closure group. Therefore, there was no substantial difference in the recurrence rates.\n\n\nCONCLUSION\nThe search for an ideal treatment modality for pilonidal sinus disease is still ongoing. The main conclusion of our study is that a tension-free healing side is much more important than a midline suture line. Also, tension-free primary closure is as effective as a flap procedure, and it is also easier to perform.",
"title": ""
},
{
"docid": "d79a1a6398e98855ddd1181c141d7b00",
"text": "In this paper we describe a new binarisation method designed specifically for OCR of low quality camera images: Background Surface Thresholding or BST. This method is robust to lighting variations and produces images with very little noise and consistent stroke width. BST computes a ”surface” of background intensities at every point in the image and performs adaptive thresholding based on this result. The surface is estimated by identifying regions of lowresolution text and interpolating neighbouring background intensities into these regions. The final threshold is a combination of this surface and a global offset. According to our evaluation BST produces considerably fewer OCR errors than Niblack’s local average method while also being more runtime efficient.",
"title": ""
},
{
"docid": "3e0d88a135e7d7daff538eea1a6f2c9d",
"text": "The first step in an image retrieval pipeline consists of comparing global descriptors from a large database to find a short list of candidate matching images. The more compact the global descriptor, the faster the descriptors can be compared for matching. State-of-the-art global descriptors based on Fisher Vectors are represented with tens of thousands of floating point numbers. While there is significant work on compression of local descriptors, there is relatively little work on compression of high dimensional Fisher Vectors. We study the problem of global descriptor compression in the context of image retrieval, focusing on extremely compact binary representations: 64-1024 bits. Motivated by the remarkable success of deep neural networks in recent literature, we propose a compression scheme based on deeply stacked Restricted Boltzmann Machines (SRBM), which learn lower dimensional non-linear subspaces on which the data lie. We provide a thorough evaluation of several state-of-the-art compression schemes based on PCA, Locality Sensitive Hashing, Product Quantization and greedy bit selection, and show that the proposed compression scheme outperforms all existing schemes.",
"title": ""
},
{
"docid": "7e26a6ccd587ae420b9d2b83f6b54350",
"text": "Because of the SARS epidemic in Asia, people chose to the Internet shopping instead of going shopping on streets. In other words, SARS actually gave the Internet an opportunity to revive from its earlier bubbles. The purpose of this research is to provide managers of shopping Websites regarding consumer purchasing decisions based on the CSI (Consumer Styles Inventory) which was proposed by Sproles (1985) and Sproles & Kendall (1986). According to the CSI, one can capture the decision-making styles of online shoppers. Furthermore, this research also discusses the gender differences among online shoppers. Exploratory factor analysis (EFA) was used to understand the decision-making styles and discriminant analysis was used to distinguish the differences between female and male shoppers. Managers of Internet shopping Websites can design a proper marketing mix with the findings that there are differences in purchasing decisions between genders.",
"title": ""
},
{
"docid": "7f49cb5934130fb04c02db03bd40e83d",
"text": "BACKGROUND\nResearch literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. We discuss the concept of problematic smartphone use, including possible causal pathways to such use.\n\n\nMETHOD\nWe conducted a systematic review of the relationship between problematic use with psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewer papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology.\n\n\nRESULTS\nMost papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects.\n\n\nLIMITATIONS\nWe only included correlational studies in our systematic review, but address the few relevant experimental studies also.\n\n\nCONCLUSIONS\nWe discuss causal explanations for relationships between problem smartphone use and psychopathology.",
"title": ""
},
{
"docid": "a48278ee8a21a33ff87b66248c6b0b8a",
"text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.",
"title": ""
},
{
"docid": "0f8bf207201692ad4905e28a2993ef29",
"text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.",
"title": ""
}
] |
scidocsrr
|
d20ee1b0987b213978540bd652324184
|
A Distributed Anomaly Detection System for In-Vehicle Network Using HTM
|
[
{
"docid": "c158e9421ec0d1265bd625b629e64dc5",
"text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.",
"title": ""
},
{
"docid": "0f7f8557ffa238a529f28f9474559cc4",
"text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perception, one of the artificial neural network techniques, based on real benchmarking data. q 2005 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "400dce50037a38d19a3057382d9246b5",
"text": "A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.",
"title": ""
},
{
"docid": "c3c0e14aa82b438ceb92a84bcdbed184",
"text": "Advances in technology for miniature electronic military equipment and systems have led to the emergence of unmanned aerial vehicles (UAVs) as the new weapons of war and tools used in various other areas. UAVs can easily be controlled from a remote location. They are being used for critical operations, including offensive, reconnaissance, surveillance and other civilian missions. The need to secure these channels in a UAV system is one of the most important aspects of the security of this system because all information critical to the mission is sent through wireless communication channels. It is well understood that loss of control over these systems to adversaries due to lack of security is a potential threat to national security. In this paper various security threats to a UAV system is analyzed and a cyber-security threat model showing possible attack paths has been proposed. This model will help designers and users of the UAV systems to understand the threat profile of the system so as to allow them to address various system vulnerabilities, identify high priority threats, and select mitigation techniques for these threats.",
"title": ""
},
{
"docid": "7f2acf667a66f2812023c26c4ca95cf1",
"text": "Vehicle-IT convergence technology is a rapidly rising paradigm of modern vehicles, in which an electronic control unit (ECU) is used to control the vehicle electrical systems, and the controller area network (CAN), an in-vehicle network, is commonly used to construct an efficient network of ECUs. Unfortunately, security issues have not been treated properly in CAN, although CAN control messages could be life-critical. With the appearance of the connected car environment, in-vehicle networks (e.g., CAN) are now connected to external networks (e.g., 3G/4G mobile networks), enabling an adversary to perform a long-range wireless attack using CAN vulnerabilities. In this paper we show that a long-range wireless attack is physically possible using a real vehicle and malicious smartphone application in a connected car environment. We also propose a security protocol for CAN as a countermeasure designed in accordance with current CAN specifications. We evaluate the feasibility of the proposed security protocol using CANoe software and a DSP-F28335 microcontroller. Our results show that the proposed security protocol is more efficient than existing security protocols with respect to authentication delay and communication load.",
"title": ""
}
] |
[
{
"docid": "d8ebc5a68f8e3e7db1abc6a0e7b37da2",
"text": "Previous research shows that interleaving rather than blocking practice of different skills (e.g. abcbcacab instead of aaabbbccc) usually improves subsequent test performance. Yet interleaving, but not blocking, ensures that practice of any particular skill is distributed, or spaced, because any two opportunities to practice the same task are not consecutive. Hence, because spaced practice typically improves test performance, the previously observed test benefits of interleaving may be due to spacing rather than interleaving per se. In the experiment reported herein, children practiced four kinds of mathematics problems in an order that was interleaved or blocked, and the degree of spacing was fixed. The interleaving of practice impaired practice session performance yet doubled scores on a test given one day later. An analysis of the errors suggested that interleaving boosted test scores by improving participants’ ability to pair each problem with the appropriate procedure. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "0fba05a38cb601a1b08e6105e6b949c1",
"text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.",
"title": ""
},
{
"docid": "019b9076c051d7eb3ad4aae0e018e45c",
"text": "This paper investigates the possible application of reinforcement learning to Tetris. The author investigates the background of Tetris, and qualifies it in a mathematical context. The author discusses reinforcement learning, and considers historically successful applications of it. Finally the author discusses considerations surrounding implementation.",
"title": ""
},
{
"docid": "832916685b22b536d1e8e85f0eeb0e14",
"text": "People have always sought an attractive smile in harmony with an esthetic appearance. This trend is steadily growing as it influences one’s self esteem and psychological well-being.1,2 Faced with highly esthetic demanding patients, the practitioner should guarantee esthetic outcomes involving conservative procedures. This is undoubtedly challenging and often requiring a perfect multidisciplinary approach.3",
"title": ""
},
{
"docid": "593077b1e73b42abbe35b3c4a49cfd50",
"text": "In this paper, we propose a device-to-device (D2D) discovery scheme as a key enabler for a proximity-based service in the Long-Term Evolution Advanced (LTE-A) system. The proximity-based service includes a variety of services exploiting the location information of user equipment (UE), for example, the mobile social network and the mobile marketing. To realize the proximity-based service in the LTE-A system, it is necessary to design a D2D discovery scheme by which UE can discover another UE in its proximity. We design a D2D discovery scheme based on the random access procedure in the LTE-A system. The proposed random-access-based D2D discovery scheme is advantageous in that 1) the proposed scheme can be readily applied to the current LTE-A system without significant modification; 2) the proposed scheme discovers pairs of UE in a centralized manner, which enables the access or core network to centrally control the formation of D2D communication networks; and 3) the proposed scheme adaptively allocates resource blocks for the D2D discovery to prevent underutilization of radio resources. We analyze the performance of the proposed D2D discovery scheme. A closed-form formula for the performance is derived by means of the stochastic geometry-based approach. We show that the analysis results accurately match the simulation results.",
"title": ""
},
{
"docid": "f6d08e76bfad9c4988253b643163671a",
"text": "This paper proposes a technique for unwanted lane departure detection. Initially, lane boundaries are detected using a combination of the edge distribution function and a modified Hough transform. In the tracking stage, a linear-parabolic lane model is used: in the near vision field, a linear model is used to obtain robust information about lane orientation; in the far field, a quadratic function is used, so that curved parts of the road can be efficiently tracked. For lane departure detection, orientations of both lane boundaries are used to compute a lane departure measure at each frame, and an alarm is triggered when such measure exceeds a threshold. Experimental results indicate that the proposed system can fit lane boundaries in the presence of several image artifacts, such as sparse shadows, lighting changes and bad conditions of road painting, being able to detect in advance involuntary lane crossings. q 2005 Elsevier Ltd All rights reserved.",
"title": ""
},
{
"docid": "751563e10e62d6b8c4a4db9909e92058",
"text": "Summarising a high dimensional data set with a low dimension al embedding is a standard approach for exploring its structure. In this paper we provide an over view of some existing techniques for discovering such embeddings. We then introduce a novel prob abilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PC A (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the e mbedded space can easily be nonlinearised through Gaussian processes. We refer to this mod el as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective fu nction, we relate the model to popular spectral techniques such as kernel PCA and multidim ensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrat e the model on a range of real-world and artificially generated data sets.",
"title": ""
},
{
"docid": "8cd970e1c247478f01a9fe2f62530fc4",
"text": "In this paper, we propose a method for grasping unknown objects from piles or cluttered scenes, given a point cloud from a single depth camera. We introduce a shape-based method - Symmetry Height Accumulated Features (SHAF) - that reduces the scene description complexity such that the use of machine learning techniques becomes feasible. We describe the basic Height Accumulated Features and the Symmetry Features and investigate their quality using an F-score metric. We discuss the gain from Symmetry Features for grasp classification and demonstrate the expressive power of Height Accumulated Features by comparing it to a simple height based learning method. In robotic experiments of grasping single objects, we test 10 novel objects in 150 trials and show significant improvement of 34% over a state-of-the-art method, achieving a success rate of 92%. An improvement of 29% over the competitive method was achieved for a task of clearing a table with 5 to 10 objects and overall 90 trials. Furthermore we show that our approach is easily adaptable for different manipulators by running our experiments on a second platform.",
"title": ""
},
{
"docid": "edc3562602fc9b275e18d44ea3a5d8ac",
"text": "The replicase of all cells is thought to utilize two DNA polymerases for coordinated synthesis of leading and lagging strands. The DNA polymerases are held to DNA by circular sliding clamps. We demonstrate here that the E. coli DNA polymerase III holoenzyme assembles into a particle that contains three DNA polymerases. The three polymerases appear capable of simultaneous activity. Furthermore, the trimeric replicase is fully functional at a replication fork with helicase, primase, and sliding clamps; it produces slightly shorter Okazaki fragments than replisomes containing two DNA polymerases. We propose that two polymerases can function on the lagging strand and that the third DNA polymerase can act as a reserve enzyme to overcome certain types of obstacles to the replication fork.",
"title": ""
},
{
"docid": "997adb89f1e02b66f8e3edc6f2b6aed2",
"text": "Chimeric antigen receptor (CAR)-engineered T cells (CAR-T cells) have yielded unprecedented efficacy in B cell malignancies, most remarkably in anti-CD19 CAR-T cells for B cell acute lymphoblastic leukemia (B-ALL) with up to a 90% complete remission rate. However, tumor antigen escape has emerged as a main challenge for the long-term disease control of this promising immunotherapy in B cell malignancies. In addition, this success has encountered significant hurdles in translation to solid tumors, and the safety of the on-target/off-tumor recognition of normal tissues is one of the main reasons. In this mini-review, we characterize some of the mechanisms for antigen loss relapse and new strategies to address this issue. In addition, we discuss some novel CAR designs that are being considered to enhance the safety of CAR-T cell therapy in solid tumors.",
"title": ""
},
{
"docid": "0836e5d45582b0a0eec78234776aa419",
"text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an! agile and responsive datacenter built from your existing technology investments.’,! ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/ datacenter/virtualization.aspx’,! ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’,! ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’,! ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/ virtualization.aspx’,! ...! Data! #Topics: 228! #Candidate Labels: ~6,000! Domains: BLOGS, BOOKS, NEWS, PUBMED! Candidate labels rated by humans (0-3) ! Published by Lau et al. (2011). 4. Scoring Candidate Labels! Candidate Label: L = {w1, w2, ..., wm}! Scoring Function: Task: The aim of the task is to associate labels with automatically generated topics.",
"title": ""
},
{
"docid": "9beeee852ce0d077720c212cf17be036",
"text": "Spoofing speech detection aims to differentiate spoofing speech from natural speech. Frame-based features are usually used in most of previous works. Although multiple frames or dynamic features are used to form a super-vector to represent the temporal information, the time span covered by these features are not sufficient. Most of the systems failed to detect the non-vocoder or unit selection based spoofing attacks. In this work, we propose to use a temporal convolutional neural network (CNN) based classifier for spoofing speech detection. The temporal CNN first convolves the feature trajectories with a set of filters, then extract the maximum responses of these filters within a time window using a max-pooling layer. Due to the use of max-pooling, we can extract useful information from a long temporal span without concatenating a large number of neighbouring frames, as in feedforward deep neural network (DNN). Five types of feature are employed to access the performance of proposed classifier. Experimental results on ASVspoof 2015 corpus show that the temporal CNN based classifier is effective for synthetic speech detection. Specifically, the proposed method brings a significant performance boost for the unit selection based spoofing speech detection.",
"title": ""
},
{
"docid": "56ed9f8a4b29653411f6ed55c68adc6f",
"text": "The studying of social influence can be used to understand and solve many complicated problems in social network analysis such as predicting influential users. This paper focuses on the problem of predicting influential users on social networks. We introduce a three-level hierarchy that classifies the influence measurements. The hierarchy categorizes the influence measurements by three folds, i.e., models, types and algorithms. Using this hierarchy, we classify the existing influence measurements. We further compare them based on an empirical analysis in terms of performance, accuracy and correlation using datasets from two different social networks to investigate the feasibility of influence measurements. Our results show that predicting influential users does not only depend on the influence measurements but also on the nature of social networks. Our goal is to introduce a standardized baseline for the problem of predicting influential users on social networks.",
"title": ""
},
{
"docid": "bcbba4f99e33ac0daea893e280068304",
"text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).",
"title": ""
},
{
"docid": "7eed84f959268599e1b724b0752f6aa5",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
},
{
"docid": "e298599e7dc7d2acfc5382a542322762",
"text": "CONTEXT\nPedagogical practices reflect theoretical perspectives and beliefs that people hold about learning. Perspectives on learning are important because they influence almost all decisions about curriculum, teaching and assessment. Since Flexner's 1910 report on medical education, significant changes in perspective have been evident. Yet calls for major reform of medical education may require a broader conceptualisation of the educational process.\n\n\nPAST AND CURRENT PERSPECTIVES\nMedical education has emerged as a complex transformative process of socialisation into the culture and profession of medicine. Theory and research, in medical education and other fields, have contributed important understanding. Learning theories arising from behaviourist, cognitivist, humanist and social learning traditions have guided improvements in curriculum design and instruction, understanding of memory, expertise and clinical decision making, and self-directed learning approaches. Although these remain useful, additional perspectives which recognise the complexity of education that effectively fosters the development of knowledge, skills and professional identity are needed.\n\n\nFUTURE PERSPECTIVES\nSocio-cultural learning theories, particularly situated learning, and communities of practice offer a useful theoretical perspective. They view learning as intimately tied to context and occurring through participation and active engagement in the activities of the community. Legitimate peripheral participation describes learners' entry into the community. As learners gain skill, they assume more responsibility and move more centrally. The community, and the people and artefacts within it, are all resources for learning. Learning is both collective and individual. Social cognitive theory offers a complementary perspective on individual learning. Situated learning allows the incorporation of other learning perspectives and includes workplace learning and experiential learning. Viewing medical education through the lens of situated learning suggests teaching and learning approaches that maximise participation and build on community processes to enhance both collective and individual learning.",
"title": ""
},
{
"docid": "28c19bf17c76a6517b5a7834216cd44d",
"text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.",
"title": ""
},
{
"docid": "70df4eee6d98efdbb741e125271f395c",
"text": "Mobile Ad Hoc networks are autonomously self-organized networks without infrastructure support. Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. Highly dynamic topology and bandwidth constraint in dense networks, brings the necessity to achieve an efficient medium access protocol subject to power constraints. Various MAC protocols with different objectives were proposed for wireless sensor networks. The aim of this paper is to outline the significance of various MAC protocols along with their merits and demerits.",
"title": ""
},
{
"docid": "1c9c30e3e007c2d11c6f5ebd0092050b",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
},
{
"docid": "d308f7ebd3f91c42023f4502fd23bc18",
"text": "We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised/unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS.",
"title": ""
}
] |
scidocsrr
|
f11778ec3603b0782524282af1f1ec29
|
Considering Race a Problem of Transfer Learning
|
[
{
"docid": "48f784f6fe073c55efbc990b2a2257c6",
"text": "Faces convey a wealth of social signals, including race, expression, identity, age and gender, all of which have attracted increasing attention from multi-disciplinary research, such as psychology, neuroscience, computer science, to name a few. Gleaned from recent advances in computer vision, computer graphics, and machine learning, computational intelligence based racial face analysis has been particularly popular due to its significant potential and broader impacts in extensive real-world applications, such as security and defense, surveillance, human computer interface (HCI), biometric-based identification, among others. These studies raise an important question: How implicit, non-declarative racial category can be conceptually modeled and quantitatively inferred from the face? Nevertheless, race classification is challenging due to its ambiguity and complexity depending on context and criteria. To address this challenge, recently, significant efforts have been reported toward race detection and categorization in the community. This survey provides a comprehensive and critical review of the state-of-the-art advances in face-race perception, principles, algorithms, and applications. We first discuss race perception problem formulation and motivation, while highlighting the conceptual potentials of racial face processing. Next, taxonomy of feature representational models, algorithms, performance and racial databases are presented with systematic discussions within the unified learning scenario. Finally, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potentially important cross-cutting themes and research directions for the issue of learning race from face.",
"title": ""
},
{
"docid": "b4a0ab9e1d074bff67f80df57a732d8d",
"text": "We study to what extend Chinese, Japanese and Korean faces can be classified and which facial attributes offer the most important cues. First, we propose a novel way of ob- taining large numbers of facial images with nationality la- bels. Then we train state-of-the-art neural networks with these labeled images. We are able to achieve an accuracy of 75.03% in the classification task, with chances being 33.33% and human accuracy 49% . Further, we train mul- tiple facial attribute classifiers to identify the most distinc- tive features for each group. We find that Chinese, Japanese and Koreans do exhibit substantial differences in certain at- tributes, such as bangs, smiling, and bushy eyebrows. Along the way, we uncover several gender-related cross-country patterns as well. Our work, which complements existing APIs such as Microsoft Cognitive Services and Face++, could find potential applications in tourism, e-commerce, social media marketing, criminal justice and even counter- terrorism.",
"title": ""
}
] |
[
{
"docid": "69f597aac301a492892354dd593a4355",
"text": "The influence of user generated content on e-commerce websites and social media has been addressed in both practical and theoretical fields. Since most previous studies focus on either electronic word of mouth (eWOM) from e-commerce websites (EC-eWOM) or social media (SM-eWOM), little is known about the adoption process when consumers are presented EC-eWOM and SM-eWOM simultaneously. We focus on this problem by considering their adoption as an interactive process. It clarifies the mechanism of consumer’s adoption for those from the perspective of cognitive cost theory. A conceptual model is proposed about the relationship between the adoptions of the two types of eWOM. The empirical analysis shows that EC-eWOM’s usefulness and credibility positively influence the adoption of EC-eWOM, but negatively influence that of SM-eWOM. EC-eWOM adoption negatively impacts SM-eWOM adoption, and mediates the relationship between usefulness, credibility and SM-eWOM adoption. The moderating effects of consumers’ cognitive level and degree of involvement are also discussed. This paper further explains the adoption of the two types of eWOM based on the cognitive cost theory and enriches the theoretical research about eWOM in the context of social commerce. Implications for practice, as well as suggestions for future research, are also discussed. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2a4f8fdee23dfb009b61899d5773206f",
"text": "We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2Dsupervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn. P. Henderson School of Informatics, University of Edinburgh, Scotland E-mail: paul@pmh47.net V. Ferrari Google Research, Zürich, Switzerland E-mail: vittoferrari@google.com",
"title": ""
},
{
"docid": "ee223b75a3a99f15941e4725d261355e",
"text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.",
"title": ""
},
{
"docid": "20a0cf9c98c80aed67e9e57718ea672b",
"text": "The evolution of the Internet and its applications has led to a notable increase in concern about social networking sites (SNSs). SNSs have had global mass appeal and their often frequent use – usually by young people – has triggered worries, discussions and studies on the topic of technological and social networking addictions. In addressing this issue, we have to ask to what extent technological and social networking addictions are of the same nature as substance addictions, and whether the consequences they lead to, if any, are severe enough to merit clinical attention. We can summarize our position on the topic by saying that SNSs are primarily used to increase social capital and that there is not currently enough empirical evidence on SNSs’ addiction potential to claim that SNS addition exists. Although SNSs can provoke certain negative consequences in a subset of their users or provide a platform for the expression of preexisting conditions, this is not sufficient support for their standalone addictive power. It is necessary to distinguish between true addictive disorders, the kind that fall under the category of substance addictions, and the negative side-effects of engaging with certain appealing activities like SNSs so that we do not undermine the severity of psychiatric disorders and the experience of the individuals suffering from them. We propose that psychoeducation, viewing SNS use in context to understand their gratifications and compensatory functions and revisiting the terminology on the subject are sufficient to address the problems that emerge from SNS usage. ARTICLE HISTORY Received 17 June 2015 Revised 1 June 2016 Accepted 1 June 2016 Published online 4 July 2016",
"title": ""
},
{
"docid": "7d2a8a4008f97738d8eacf42ea390692",
"text": "Relational inference is a crucial technique for knowledge base population. The central problem in the study of relational inference is to infer unknown relations between entities from the facts given in the knowledge bases. Two popular models have been put forth recently to solve this problem, which are the latent factor models and the random-walk models, respectively. However, each of them has their pros and cons, depending on their computational efficiency and inference accuracy. In this paper, we propose a hierarchical random-walk inference algorithm for relational learning in large scale graph-structured knowledge bases, which not only maintains the computational simplicity of the random-walk models, but also provides better inference accuracy than related works. The improvements come from two basic assumptions we proposed in this paper. Firstly, we assume that although a relation between two entities is syntactically directional, the information conveyed by this relation is equally shared between the connected entities, thus all of the relations are semantically bidirectional. Secondly, we assume that the topology structures of the relation-specific subgraphs in knowledge bases can be exploited to improve the performance of the random-walk based relational inference algorithms. The proposed algorithm and ideas are validated with numerical results on experimental data sampled from practical knowledge bases, and the results are compared to state-of-the-art approaches.",
"title": ""
},
{
"docid": "7ce1646e0fe1bd83f9feb5ec20233c93",
"text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.",
"title": ""
},
{
"docid": "d2521791d515b69d5a4a8c9ea02e3d17",
"text": "In this paper, four-wheel active steering (4WAS), which can control the front wheel steering angle and rear wheel steering angle independently, has been investigated based on the analysis of deficiency of conventional four wheel steering (4WS). A model following control structure is adopted to follow the desired yaw rate and vehicle sideslip angle, which consists of feedforward and feedback controller. The feedback controller is designed based on the optimal control theory, minimizing the tracking errors between the outputs of actual vehicle model and that of linear reference model. Finally, computer simulations are performed to evaluate the proposed control system via the co-simulation of Matlab/Simulink and CarSim. Simulation results show that the designed 4WAS controller can achieve the good response performance and improve the vehicle handling and stability.",
"title": ""
},
{
"docid": "e5d2771610e1f1d3153937b072fd8d31",
"text": "The role of the gut microbiome in models of inflammatory and autoimmune disease is now well characterized. Renewed interest in the human microbiome and its metabolites, as well as notable advances in host mucosal immunology, has opened multiple avenues of research to potentially modulate inflammatory responses. The complexity and interdependence of these diet-microbe-metabolite-host interactions are rapidly being unraveled. Importantly, most of the progress in the field comes from new knowledge about the functional properties of these microorganisms in physiology and their effect in mucosal immunity and distal inflammation. This review summarizes the preclinical and clinical evidence on how dietary, probiotic, prebiotic, and microbiome based therapeutics affect our understanding of wellness and disease, particularly in autoimmunity.",
"title": ""
},
{
"docid": "03a2b9ebdac78ca3a6c808f87f73c26b",
"text": "OBJECTIVE\nPost-traumatic stress disorder (PTSD) has major public health significance. Evidence that PTSD may be associated with premature senescence (early or accelerated aging) would have major implications for quality of life and healthcare policy. We conducted a comprehensive review of published empirical studies relevant to early aging in PTSD.\n\n\nMETHOD\nOur search included the PubMed, PsycINFO, and PILOTS databases for empirical reports published since the year 2000 relevant to early senescence and PTSD, including: 1) biomarkers of senescence (leukocyte telomere length [LTL] and pro-inflammatory markers), 2) prevalence of senescence-associated medical conditions, and 3) mortality rates.\n\n\nRESULTS\nAll six studies examining LTL indicated reduced LTL in PTSD (pooled Cohen's d = 0.76). We also found consistent evidence of increased pro-inflammatory markers in PTSD (mean Cohen's ds), including C-reactive protein = 0.18, Interleukin-1 beta = 0.44, Interleukin-6 = 0.78, and tumor necrosis factor alpha = 0.81. The majority of reviewed studies also indicated increased medical comorbidity among several targeted conditions known to be associated with normal aging, including cardiovascular disease, type 2 diabetes mellitus, gastrointestinal ulcer disease, and dementia. We also found seven of 10 studies indicated PTSD to be associated with earlier mortality (average hazard ratio: 1.29).\n\n\nCONCLUSION\nIn short, evidence from multiple lines of investigation suggests that PTSD may be associated with a phenotype of accelerated senescence. Further research is critical to understand the nature of this association. There may be a need to re-conceptualize PTSD beyond the boundaries of mental illness, and instead as a full systemic disorder.",
"title": ""
},
{
"docid": "5e64e36e76f4c0577ae3608b6e715a1f",
"text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.",
"title": ""
},
{
"docid": "08353c7d40a0df4909b09f2d3e5ab4fe",
"text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages during designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD dedicating to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-arts result with such a low resource requirement.∗",
"title": ""
},
{
"docid": "7b717d6c4506befee2a374333055e2d1",
"text": "This is the pre-acceptance version, to read the final version please go to IEEE Geoscience and Remote Sensing Magazine on IEEE XPlore. Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a “black-box” solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. X. Zhu and L. Mou are with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany and with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Germany, E-mails: xiao.zhu@dlr.de; lichao.mou@dlr.de. D. Tuia was with the Department of Geography, University of Zurich, Switzerland. He is now with the Laboratory of GeoInformation Science and Remote Sensing, Wageningen University of Research, the Netherlands. E-mail: devis.tuia@wur.nl. G.-S Xia and L. Zhang are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University. E-mail:guisong.xia@whu.edu.cn; zlp62@whu.edu.cn. F. Xu is with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan Univeristy. E-mail: fengxu@fudan.edu.cn. F. Fraundorfer is with the Institute of Computer Graphics and Vision, TU Graz, Austria and with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany. E-mail: fraundorfer@icg.tugraz.at. The work of X. Zhu and L. Mou are supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087], Acronym: So2Sat), Helmholtz Association under the framework of the Young Investigators Group “SiPEO” (VH-NG-1018, www.sipeo.bgu.tum.de) and China Scholarship Council. The work of D. Tuia is supported by the Swiss National Science Foundation (SNSF) under the project NO. PP0P2 150593. The work of G.-S. Xia and L. Zhang are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 41501462 and No. 41431175. The work of F. Xu are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 61571134. October 12, 2017 DRAFT ar X iv :1 71 0. 03 95 9v 1 [ cs .C V ] 1 1 O ct 2 01 7 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE, IN PRESS. 2",
"title": ""
},
{
"docid": "db422d1fcb99b941a43e524f5f2897c2",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "b44df1268804e966734ea404b8c29360",
"text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.",
"title": ""
},
{
"docid": "e74ef9d0ededd1bf4b7701c2b53eacab",
"text": "This paper presents an outline of our work to develop a word sense disambiguation system in Malayalam. Word sense disambiguation (WSD) is a linguistically based mechanism for automatically defining the correct sense of a word in the context. WSD is a long standing problem in computational linguistics. A particular word may have different meanings in different contexts. For human beings, it is easy to extract the correct meaning by analyzing the sentences. In the area of natural language processing, we are trying to simulate all of these human capabilities with a computer system. In many natural language processing tasks such as machine translation, information retrieval etc., Word Sense Disambiguation plays an important role to improve the quality of systems.",
"title": ""
},
{
"docid": "de6c7d12013908e27abda219326d9054",
"text": "A network’s physical layer is deceptively quiet. Hub lights blink in response to network traffic, but do little to convey the range of information that the network carries. Analysis of the individual traffic flows and their content is essential to a complete understanding of network usage. Many tools let you view traffic in real time, but real-time monitoring at any level requires significant human and hardware resources, and doesn’t scale to networks larger than a single workgroup. It is generally more practical to archive all traffic and analyze subsets as necessary. This process is known as reconstructive traffic analysis, or network forensics.1 In practice, it is often limited to data collection and packetlevel inspection; however, a network forensics analysis tool (NFAT) can provide a richer view of the data collected, allowing you to inspect the traffic from further up the protocol stack.2 The IT industry’s ever-growing concern with security is the primary motivation for network forensics. A network that has been prepared for forensic analysis is easy to monitor, and security vulnerabilities and configuration problems can be conveniently identified. It also allows the best possible analysis of security violations. Most importantly, analyzing a complete record of your network traffic with the appropriate reconstructive tools provides context for other breach-related events. For example, if your analysis detects a user account and its Pretty Good Privacy (PGP, www.pgp.com/index.php) keys being compromised, good practice requires you to review all subsequent activity by that user, or involving those keys. In some industries, laws such as the Health Insurance Portability and Accountability Act (HIPAA, http://cms.hhs.gov/hipaa) regulate monitoring the flow of information. While it is often difficult to balance what is required by law and what is technically feasible, a forensic record of network traffic is a good first step. Security and legal concerns are not the only reasons to want a fuller understanding of your network traffic, however. Forensics tool users have reported many other applications. If your mail server has lost several hours’ or days’ worth of received messages and traditional backup methods have failed, you can recover the messages from the recorded traffic. Similarly, the forensics record allows unhurried analysis of anomalies such as traffic spikes or application errors that might otherwise have remained hearsay.",
"title": ""
},
{
"docid": "69c65c1cbec5d4843797b7ba1a1551be",
"text": "The role of personal data gained significance across all business domains in past decades. Despite strict legal restrictions that processing personal data is subject to, users tend to respond to the extensive collection of data by service providers with distrust. Legal battles between data subjects and processors emphasized the need of adaptations by the current law to face today’s challenges. The European Union has taken action by introducing the General Data Protection Regulation (GDPR), which was adopted in April 2016 and will inure in May 2018. The GDPR extends existing data privacy rights of EU citizens and simultaneously puts pressure on controllers and processors by defining high penalties in case of non-compliance. Uncertainties remain to which extent controllers and processors need to adjust their existing technologies in order to conform to the new law. This work designs, implements, and evaluates a privacy dashboard for data subjects intending to enable and ease the execution of data privacy rights granted by the GDPR.",
"title": ""
},
{
"docid": "509075d64990cf7258c13dd0dfd5e282",
"text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.",
"title": ""
},
{
"docid": "31b449b209beaadbbcc36c485517c3cf",
"text": "While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, but at the same time providing appropriate abstractions which allow developers to create prototypes quickly which can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically-linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects all of which required novel visualizations in different domains, and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.",
"title": ""
}
] |
scidocsrr
|
50905a794a5800f5df319f20ca3452f8
|
Mobile Edge Computing: Opportunities, solutions, and challenges
|
[
{
"docid": "016a07d2ddb55149708409c4c62c67e3",
"text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d141c13cea52e72bb7b84d3546496afb",
"text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.",
"title": ""
},
{
"docid": "55d88de1b0a5ebcf1c2909dea6072879",
"text": "The unabated flurry of research activities to augment various mobile devices in terms of compute-intensive task execution by leveraging heterogeneous resources of available devices in the local vicinity has created a new research domain called mobile ad hoc cloud (MAC) or mobile cloud. It is a new type of mobile cloud computing (MCC). MAC is deemed to be a candidate blueprint for future compute-intensive applications with the aim of delivering high functionalities and rich impressive experience to mobile users. However, MAC is yet in its infancy, and a comprehensive survey of the domain is still lacking. In this paper, we survey the state-of-the-art research efforts carried out in the MAC domain. We analyze several problems inhibiting the adoption of MAC and review corresponding solutions by devising a taxonomy. Moreover, MAC roots are analyzed and taxonomized as architectural components, applications, objectives, characteristics, execution model, scheduling type, formation technologies, and node types. The similarities and differences among existing proposed solutions by highlighting the advantages and disadvantages are also investigated. We also compare the literature based on objectives. Furthermore, our study advocates that the problems stem from the intrinsic characteristics of MAC by identifying several new principles. Lastly, several open research challenges such as incentives, heterogeneity-ware task allocation, mobility, minimal data exchange, and security and privacy are presented as future research directions. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "3a47c4e3e5c98b9da1e1b73f2f6d3dc6",
"text": "This paper examines a semantic approach for identity management, namely the W3C WebID, as a representation of personal information, and the WebID-TLS as a decentralized authentication protocol, allowing individuals to manage their own identities and data privacy. The paper identifies a set of important usability, privacy and security issues that needs to be addressed, and proposes an end to end authentication mechanism based on WebID, JSON Web Tokens (JWT) and the blockchain. The WebID includes a personal profile with its certificate, and the social relationship information described as the RDF-based FOAF ontology. The JWT is a standardized container format to encode personal related information in a secure way using \"claims\". The distributed, irreversible, undeletable, and immutable nature of the blockchain has appropriate attributes for distributed credential storage and decentralized identity management.",
"title": ""
},
{
"docid": "24880289ca2b6c31810d28c8363473b3",
"text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"title": ""
},
{
"docid": "43ee3d818b528081aadf6abdc23650fa",
"text": "Cloud computing has become an increasingly important research topic given the strong evolution and migration of many network services to such computational environment. The problem that arises is related with efficiency management and utilization of the large amounts of computing resources. This paper begins with a brief retrospect of traditional scheduling, followed by a detailed review of metaheuristic algorithms for solving the scheduling problems by placing them in a unified framework. Armed with these two technologies, this paper surveys the most recent literature about metaheuristic scheduling solutions for cloud. In addition to applications using metaheuristics, some important issues and open questions are presented for the reference of future researches on scheduling for cloud.",
"title": ""
},
{
"docid": "455068ecca4db680a8cd65bf127cfc91",
"text": "OBJECTIVES\nLoneliness is common among older persons and has been associated with health and mental health risks. This systematic review examines the utility of loneliness interventions among older persons.\n\n\nDATA SOURCE\nThirty-four intervention studies were used. STUDY INCLUSION CRITERIA: The study was conducted between 1996 and 2011, included a sample of older adults, implemented an intervention affecting loneliness or identified a situation that directly affected loneliness, included in its outcome measures the effects of the intervention or situation on loneliness levels or on loneliness-related measures (e.g., social interaction), and included in its analysis pretest-posttest comparisons.\n\n\nDATA EXTRACTION\nStudies were accessed using the databases PsycINFO, MEDLINE, ScienceDirect, AgeLine, PsycBOOKS, and Google Scholar for the years 1996-2011.\n\n\nDATA SYNTHESIS\nInterventions were classified based on population, format, and content and were evaluated for quality of design and efficacy.\n\n\nRESULTS\nTwelve studies were effective in reducing loneliness according to the review criteria, and 15 were evaluated as potentially effective. The findings suggest that it is possible to reduce loneliness by using educational interventions focused on social networks maintenance and enhancement.\n\n\nCONCLUSIONS\nMultiple approaches show promise, although flawed design often prevents proper evaluation of efficacy. The value of specific therapy techniques in reducing loneliness is highlighted and warrants a wider investigation. Studies of special populations, such as the cognitively impaired, are also needed.",
"title": ""
},
{
"docid": "9955b14187e172e34f233fec70ae0a38",
"text": "Neural network language models (NNLM) have become an increasingly popular choice for large vocabulary continuous speech recognition (LVCSR) tasks, due to their inherent generalisation and discriminative power. This paper present two techniques to improve performance of standard NNLMs. First, the form of NNLM is modelled by introduction an additional output layer node to model the probability mass of out-of-shortlist (OOS) words. An associated probability normalisation scheme is explicitly derived. Second, a novel NNLM adaptation method using a cascaded network is proposed. Consistent WER reductions were obtained on a state-of-the-art Arabic LVCSR task over conventional NNLMs. Further performance gains were also observed after NNLM adaptation.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
},
{
"docid": "be91ec9b4f017818f32af09cafbb2a9a",
"text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …",
"title": ""
},
{
"docid": "abdc445e498c6d04e8f046e9c2610f9f",
"text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.",
"title": ""
},
{
"docid": "e4a59205189e8cca8a1aba704460f8ec",
"text": "In this paper, we compare two methods for article summarization. The first method is mainly based on term-frequency, while the second method is based on ontology. We build an ontology database for analyzing the main topics of the article. After identifying the main topics and determining their relative significance, we rank the paragraphs based on the relevance between main topics and each individual paragraph. Depending on the ranks, we choose desired proportion of paragraphs as summary. Experimental results indicate that both methods offer similar accuracy in their selections of the paragraphs.",
"title": ""
},
{
"docid": "b0709248d08564b7d1a1f23243aa0946",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "ae73bdfbfe949201036f00820f20a086",
"text": "Increasing efficiency by improving locomotion methods is a key issue for underwater robots. Moreover, a number of different control design challenges must be solved to realize operational swimming robots for underwater tasks. This article proposes and experimentally validates a straightline-path-following controller for biologically inspired swimming snake robots. In particular, a line-of-sight (LOS) guidance law is presented, which is combined with a sinusoidal gait pattern and a directional controller that steers the robot toward and along the desired path. The performance of the path-following controller is investigated through experiments with a physical underwater snake robot for both lateral undulation and eel-like motion. In addition, fluid parameter identification is performed, and simulation results based on the identified fluid coefficients are presented to obtain a back-to-back comparison with the motion of the physical robot during the experiments. The experimental results show that the proposed control strategy successfully steers the robot toward and along the desired path for both lateral undulation and eel-like motion patterns.",
"title": ""
},
{
"docid": "9c8f54b087d90a2bcd9e3d7db1aabd02",
"text": "The \"new Dark Silicon\" model benchmarks transistor technologies at the architectural level for multi-core processors.",
"title": ""
},
{
"docid": "48bb48f6f63e233d17441494d8b81b2a",
"text": "With the proliferation of mobile computing technology, mobile learning (m-learning) will play a vital role in the rapidly growing electronic learning market. M-learning is the delivery of learning to students anytime and anywhere through the use of wireless Internet and mobile devices. However, acceptance of m-learning by individuals is critical to the successful implementation of m-learning systems. Thus, there is a need to research the factors that affect user intention to use m-learning. Based on the unified theory of acceptance and use of technology (UTAUT), which integrates elements across eight models of information technology use, this study was to investigate the determinants of m-learning acceptance and to discover if there exist either age or gender differences in the acceptance of m-learning, or both. Data collected from 330 respondents in Taiwan were tested against the research model using the structural equation modelling approach. The results indicate that performance expectancy, effort expectancy, social influence, perceived playfulness, and self-management of learning were all significant determinants of behavioural intention to use m-learning. We also found that age differences moderate the effects of effort expectancy and social influence on m-learning use intention, and that gender differences moderate the effects of social influence and self-management of learning on m-learning use intention. These findings provide several important implications for m-learning acceptance, in terms of both research and practice. British Journal of Educational Technology Vol 40 No 1 2009 92–118 doi:10.1111/j.1467-8535.2007.00809.x © 2007 The Authors. Journal compilation © 2007 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Introduction The use of information and communication technology (ICT) may improve learning, especially when coupled with more learner-centred instruction (Zhu & Kaplan, 2002). From notebook computers to wireless phones and handheld devices, the massive infusion of computing devices and rapidly improving Internet capabilities have altered the nature of higher education (Green, 2000). Mobile learning (m-learning) is the follow up of e-learning, which for its part originates from distance education. M-learning refers to the delivery of learning to students anytime and anywhere through the use of wireless Internet and mobile devices, including mobile phones, personal digital assistants (PDAs), smart phones and digital audio players. Namely, m-learning users can interact with educational resources while away from their normal place of learning— the classroom or desktop computer. The place independence of mobile devices provides several benefits for e-learning environments, such as allowing students and instructors to utilise their spare time while traveling in trains or buses to finish their homework or lesson preparation (Virvou & Alepis, 2005). If e-learning took learning away from the classroom, then m-learning is taking learning away from a fixed location (Cmuk, 2007). Motiwalla (2007) contends that learning on mobile devices will never replace classroom or other e-learning approaches. Thus, m-learning is a complementary activity to both e-learning and traditional learning. 
However, Motiwalla (2007) also suggests that if leveraged properly, mobile technology can complement and add value to the existing learning models, such as the social constructive theory of learning with technology (Brown & Campione, 1996) and conversation theory (Pask, 1975). Thus, some believe that m-learning is becoming progressively more significant, and that it will play a vital role in the rapidly growing e-learning market. Despite the tremendous growth and potential of the mobile devices and networks, wireless e-learning and m-learning are still in their infancy or embryonic stage (Motiwalla, 2007). While the opportunities provided by m-learning are new, there are several challenges facing m-learning, such as connectivity, small screen sizes, limited processing power and reduced input capabilities. Siau, Lim and Shen (2001) also note that mobile devices have '(1) small screens and small multifunction key pads; (2) less computational power, limited memory and disk capacity; (3) shorter battery life; (4) complicated text input mechanisms; (5) higher risk of data storage and transaction errors; (6) lower display resolution; (7) less surfability; (8) unfriendly user-interfaces; and (9) graphical limitations' (p. 6). Equipped with a small phone-style keyboard or a touch screen, users might require more time to search for some information on a page than they need to read it (Motiwalla, 2007). These challenges mean that adapting existing e-learning services to m-learning is not an easy task, and that users may be inclined to not accept m-learning. Thus, the success of m-learning may depend on whether or not users are willing to adopt the new technology that is different from what they have used in the past. While e-learning and mobile commerce/learning have received extensive attention (Concannon, Flynn & Campbell, 2005; Davies & Graff, 2005; Govindasamy, 2002; Harun, 2002; Ismail, 2002; Luarn & Lin, 2005; Mwanza & Engeström, 2005; Motiwalla, 2007; Pituch & Lee, 2006; Selim, 2007; Shee & Wang, in press; Ravenscroft & Matheson, 2002; Wang, 2003), thus far, little research has been conducted to investigate the factors affecting users' intentions to adopt m-learning, and to explore the age and gender differences in terms of the acceptance of m-learning. As Pedersen and Ling (2003) suggest, even though traditional Internet services and mobile services are expected to converge into mobile Internet services, few attempts have been made to apply traditional information technology (IT) adoption models to explain their potential adoption. Consequently, the objective of this study was to investigate the determinants, as well as the age and gender differences, in the acceptance of m-learning based on the unified theory of acceptance and use of technology (UTAUT) proposed by Venkatesh, Morris, Davis and Davis (2003). The remainder of this paper is organised as follows. In the next section, we review the UTAUT and show our reasoning for adopting it as the theoretical framework of this study. This is followed by descriptions of the research model and methods. We then present the results of the data analysis and hypotheses testing. Finally, the implications and limitations of this study are discussed. Unified Theory of Acceptance and Use of Technology: M-learning acceptance is the central theme of this study, and represents a fundamental managerial challenge in terms of m-learning implementation. 
A review of prior studies provided a theoretical foundation for hypotheses formulation. Based on eight prominent models in the field of IT acceptance research, Venkatesh et al (2003) proposed a unified model, called the unified theory of acceptance and use of technology (UTAUT), which integrates elements across the eight models. The eight models consist of the theory of reasoned action (TRA) (Fishbein & Ajzen, 1975), the technology acceptance model (TAM) (Davis, 1989), the motivational model (MM) (Davis, Bagozzi & Warshaw, 1992), the theory of planned behaviour (TPB) (Ajzen, 1991), the combined TAM and TPB (C-TAM-TPB) (Taylor & Todd, 1995a), the model of PC utilisation (MPCU) (Triandis, 1977; Thompson, Higgins & Howell, 1991), the innovation diffusion theory (IDT) (Rogers, 2003; Moore & Benbasat, 1991) and the social cognitive theory (SCT) (Bandura, 1986; Compeau & Higgins, 1995). Based on Venkatesh et al's (2003) study, we briefly review the core constructs in each of the eight models, which have been theorised as the determinants of IT usage intention and/or behaviour. First, TRA has been considered to be one of the most fundamental and influential theories on human behaviour. Attitudes toward behaviour and subjective norms are the two core constructs in TRA. Second, TAM was originally developed to predict IT acceptance and usage on the job, and has been extensively applied to various types of technologies and users. Perceived usefulness and perceived ease of use are the two main constructs mentioned in TAM. More recently, Venkatesh and Davis (2000) presented TAM2 by adding subjective norms to the TAM in the case of mandatory settings. Third, Davis et al (1992) employed motivation theory to understand new technology acceptance and usage, focusing on the primary constructs of extrinsic motivation and intrinsic motivation. Fourth, TPB extended TRA by including the construct of perceived behavioural control, and has been successfully applied to the understanding of individual acceptance and usage of various technologies (Harrison, Mykytyn & Riemenschneider, 1997; Mathieson, 1991; Taylor & Todd, 1995b). Fifth, C-TAM-TPB is a hybrid model that combines the predictors of TPB with perceived usefulness from TAM. Sixth, based on Triandis' (1977) theory of human behaviour, Thompson et al (1991) presented the MPCU and used this model to predict PC utilisation. MPCU consists of six constructs, including job fit, complexity, long-term consequences, affect towards use, social factors and facilitating conditions. Seventh, Moore and Benbasat (1991) adapted the properties of innovations posited by IDT and refined a set of constructs that could be used to explore individual technology acceptance. These constructs include relative advantage, ease of use, image, visibility, compatibility, results demonstrability and voluntariness of use. Finally, Compeau and Higgins (1995) applied and extended SCT to the context of computer utilisation (see also Compeau, Higgins &",
"title": ""
},
{
"docid": "458392765ce4aa8b61eda7efd51aad8d",
"text": "The goal of active learning is to minimise the cost of producing an annotated dataset, in which annotators are assumed to be perfect, i.e., they always choose the correct labels. However, in practice, annotators are not infallible, and they are likely to assign incorrect labels to some instances. Proactive learning is a generalisation of active learning that can model different kinds of annotators. Although proactive learning has been applied to certain labelling tasks, such as text classification, there is little work on its application to named entity (NE) tagging. In this paper, we propose a proactive learning method for producing NE annotated corpora, using two annotators with different levels of expertise, and who charge different amounts based on their levels of experience. To optimise both cost and annotation quality, we also propose a mechanism to present multiple sentences to annotators at each iteration. Experimental results for several corpora show that our method facilitates the construction of high-quality NE labelled datasets at minimal cost.",
"title": ""
},
{
"docid": "63685ec8d8697d6f811f38b24c9a4e8c",
"text": "Over the past decade, our group has approached interaction design from an industrial design point of view. In doing so, we focus on a branch of design called “formgiving” Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design, including German (Gestaltung), Danish (formgivnin), Swedish (formgivning) and Dutch (vormgeving). . Traditionally, formgiving has been concerned with such aspects of objects as form, colour, texture and material. In the context of interaction design, we have come to see formgiving as the way in which objects appeal to our senses and motor skills. In this paper, we first describe our approach to interaction design of electronic products. We start with how we have been first inspired and then disappointed by the Gibsonian perception movement [1], how we have come to see both appearance and actions as carriers of meaning, and how we see usability and aesthetics as inextricably linked. We then show a number of interaction concepts for consumer electronics with both our initial thinking and what we learnt from them. Finally, we discuss the relevance of all this for tangible interaction. We argue that, in addition to a data-centred view, it is also possible to take a perceptual-motor-centred view on tangible interaction. In this view, it is the rich opportunities for differentiation in appearance and action possibilities that make physical objects open up new avenues to meaning and aesthetics in interaction design. Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design, including German (Gestaltung), Danish (formgivnin), Swedish (formgivning) and Dutch (vormgeving).",
"title": ""
},
{
"docid": "ed0465dc58b0f9c62e729fed4054bb58",
"text": "In this study, an instructional design model was employed for restructuring a teacher education course with technology. The model was applied in a science education method course, which was offered in two different but consecutive semesters with a total enrollment of 111 students in the fall semester and 116 students in the spring semester. Using tools, such as multimedia authoring tools in the fall semester and modeling software in the spring semester, teacher educators designed high quality technology-infused lessons for science and, thereafter, modeled them in classroom for preservice teachers. An assessment instrument was constructed to assess preservice teachers technology competency, which was measured in terms of four aspects, namely, (a) selection of appropriate science topics to be taught with technology, (b) use of appropriate technology-supported representations and transformations for science content, (c) use of technology to support teaching strategies, and (d) integration of computer activities with appropriate inquiry-based pedagogy in the science classroom. The results of a MANOVA showed that preservice teachers in the Modeling group outperformed preservice teachers overall performance in the Multimedia group, F = 21.534, p = 0.000. More specifically, the Modeling group outperformed the Multimedia group on only two of the four aspects of technology competency, namely, use of technology to support teaching strategies and integration of computer activities with appropriate pedagogy in the classroom, F = 59.893, p = 0.000, and F = 10.943, p = 0.001 respectively. The results indicate that the task of preparing preservice teachers to become technology competent is difficult and requires many efforts for providing them with ample of 0360-1315/$ see front matter 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2004.06.002 * Tel.: +357 22 753772; fax: +357 22 377950. E-mail address: cangeli@ucy.ac.cy. 384 C. Angeli / Computers & Education 45 (2005) 383–398 opportunities during their education to develop the competencies needed to be able to teach with technology. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ece8f2f4827decf0c440ca328ee272b4",
"text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVM's and other \"black-box\" classifiers into a set of human-understandable rules, is critical not only for physician acceptance, but also to reducing the regulatory barrier for medical-decision support systems based on such classifiers.",
"title": ""
},
{
"docid": "de4e2e131a0ceaa47934f4e9209b1cdd",
"text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.",
"title": ""
},
{
"docid": "434ea2b009a1479925ce20e8171aea46",
"text": "Several high-voltage silicon carbide (SiC) devices have been demonstrated over the past few years, and the latest-generation devices are showing even faster switching, and greater current densities. However, there are no commercial gate drivers that are suitable for these high-voltage, high-speed devices. Consequently, there has been a great research effort into the development of gate drivers for high-voltage SiC transistors. This work presents the first detailed report on the design and testing of a high-power-density, high-speed, and high-noise-immunity gate drive for a high-current, 10 kV SiC MOSFET module.",
"title": ""
},
{
"docid": "4fbc692a4291a92c6fa77dc78913e587",
"text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.",
"title": ""
}
] |
scidocsrr
|
6fe6b923a29ee1fca1ddc14233d66bbe
|
Targeted Storyfying: Creating Stories About Particular Events
|
[
{
"docid": "96540d96bf2faacd0457caed66e0db4a",
"text": "naturally associate with computers. Yet over the last few years there has been a surge of research efforts concerning the combination of both subjects. This article tries to shed light on these efforts. In carrying out this program, one is handicapped by the fact that, as words, both creativity and storytelling are severely lacking in the precision one expects of words to be used for intellectual endeavor. If a speaker were to mention either word in front of an audience, each person listening would probably come up with a different mental picture of what is intended. To avoid the risks that such vagueness might lead to, an initial effort is made here to restrict the endeavor to those aspects that have been modeled computationally in some model or system. The article then proceeds to review some of the research efforts that have addressed these problems from a computational point of view.",
"title": ""
}
] |
[
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "d31ba2b9ca7f5a33619fef33ade3b75a",
"text": "We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS.\n ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.",
"title": ""
},
{
"docid": "44368062de68f6faed57d43b8e691e35",
"text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.",
"title": ""
},
{
"docid": "1b4ece2fe2c92fa1f3c5c8d61739cbb7",
"text": "Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. [37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227 × 227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models Plug and Play Generative Networks. PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable condition network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.",
"title": ""
},
{
"docid": "accbf418bb065494953e784e7c93d0e9",
"text": "Spreadsheets are among the most commonly used applications for data management and analysis. Perhaps they are even among the most widely used computer applications of all kinds. However, the spreadsheet paradigm of computation still lacks sufficient analysis.\n In this paper we demonstrate that a spreadsheet can play the role of a relational database engine, without any use of macros or built-in programming languages, merely by utilizing spreadsheet formulas. We achieve that by implementing all operators of relational algebra by means of spreadsheet functions.\n Given a definition of a database in SQL, it is therefore possible to construct a spreadsheet workbook with empty worksheets for data tables and worksheets filled with formulas for queries. From then on, when the user enters, alters or deletes data in the data worksheets, the formulas in query worksheets automatically compute the actual results of the queries. Thus, the spreadsheet serves as data storage and executes SQL queries, and therefore acts as a relational database engine.\n The paper is based on Microsoft Excel (TM), but our constructions work in other spreadsheet systems, too. We present a number of performance tests conducted in the beta version of Excel 2010. Their conclusion is that the performance is sufficient for a desktop database with a couple thousand rows.",
"title": ""
},
{
"docid": "f9c8ae3d69d4e145e9d4ad2d2c828791",
"text": "Phycobiliproteins are a group of colored proteins commonly present in cyanobacteria and red algae possessing a spectrum of applications. They are extensively commercialized for fluorescent applications in clinical and immunological analysis. They are also used as a colorant, and their therapeutic value has also been categorically demonstrated. However, a comprehensive knowledge and technological base for augmenting their commercial utilities is lacking. Hence, this work is focused towards this objective by means of analyzing global patents and commercial activities with application oriented research. Strategic mining of patents was performed from global patent databases resulting in the identification of 297 patents on phycobiliproteins. The majority of the patents are from USA, Japan and Europe. Patents are grouped into fluorescent applications, general applications and production aspects of phycobiliproteins and the features of each group are discussed. Commercial and applied research activities are compared in parallel. It revealed that US patents are mostly related to fluorescent applications while Japanese are on the production, purification and application for therapeutic and diagnostic purposes. Fluorescent applications are well represented in research, patents and commercial sectors. Biomedical properties documented in research and patents are not ventured commercially. Several novel applications are reported only in patents. The paper further pinpoints the plethora of techniques used for cell breakage and for extraction and purification of phycobiliproteins. The analysis identifies the lacuna and suggests means for improvements in the application and production of phycobiliproteins.",
"title": ""
},
{
"docid": "b21135f6c627d7dfd95ad68c9fc9cc48",
"text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than \\'18just' a mother. We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.",
"title": ""
},
{
"docid": "721121e1393aea483d93a0b4d7fd2543",
"text": "Bitmap indexes must be compressed to reduce input/output costs and minimize CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. These techniques are sensitive to the order of the rows: a simple lexicographical sort can divide the index size by 9 and make indexes several times faster. We investigate row-reordering heuristics. Simply permuting the columns of the table can increase the sorting efficiency by 40%. Secondary contributions include efficient algorithms to construct and aggregate bitmaps. The effect of word length is also reviewed by constructing 16-bit, 32-bit and 64-bit indexes. Using 64-bit CPUs, we find that 64-bit indexes are slightly faster than 32-bit indexes despite being nearly twice as large.",
"title": ""
},
{
"docid": "f689c97559cba21d270ff9769aafe5d8",
"text": "Many sensor network applications require that each node’s sensor stream be annotated with its physical location in some common coordinate system. Manual measurement and configuration methods for obtaining location don’t scale and are error-prone, and equipping sensors with GPS is often expensive and does not work in indoor and urban deployments. Sensor networks can therefore benefit from a self-configuring method where nodes cooperate with each other, estimate local distances to their neighbors, and converge to a consistent coordinate assignment. This paper describes a fully decentralized algorithm called AFL (Anchor-Free Localization) where nodes start from a random initial coordinate assignment and converge to a consistent solution using only local node interactions. The key idea in AFL is fold-freedom, where nodes first configure into a topology that resembles a scaled and unfolded version of the true configuration, and then run a force-based relaxation procedure. We show using extensive simulations under a variety of network sizes, node densities, and distance estimation errors that our algorithm is superior to previously proposed methods that incrementally compute the coordinates of nodes in the network, in terms of its ability to compute correct coordinates under a wider variety of conditions and its robustness to measurement errors.",
"title": ""
},
{
"docid": "86df4a413696826b615ddd6004189884",
"text": "In this paper, we consider two important problems defined on finite metric spaces, and provide efficient new algorithms and approximation schemes for these problems on inputs given as graph shortest path metrics or high-dimensional Euclidean metrics. The first of these problems is the greedy permutation (or farthest-first traversal) of a finite metric space: a permutation of the points of the space in which each point is as far as possible from all previous points. We describe randomized algorithms to find (1 + ε)-approximate greedy permutations of any graph with n vertices and m edges in expected time O ( ε−1(m + n) log n log(n/ε) ) , and to find (1 + ε)-approximate greedy permutations of points in high-dimensional Euclidean spaces in expected time O(ε−2n1+1/(1+ε) 2+o(1)). Additionally we describe a deterministic algorithm to find exact greedy permutations of any graph with n vertices and treewidth O(1) in worst-case time O(n3/2 log n). The second of the two problems we consider is distance selection: given k ∈ q( n 2 )y , we are interested in computing the kth smallest distance in the given metric space. We show that for planar graph metrics one can approximate this distance, up to a constant factor, in near linear time.",
"title": ""
},
{
"docid": "4f57590f8bbf00d35b86aaa1ff476fc0",
"text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.",
"title": ""
},
{
"docid": "279870c84659e0eb6668e1ec494e77c9",
"text": "There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching taking into account a number of factors-the QUESTS dimensions. The Quality of the research evidence available-how reliable is the evidence? the Utility of the evidence-can the methods be transferred and adopted without modification, the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured-how valid is the evidence? and the Setting or context-how relevant is the evidence? The evidence available can be graded on each of the six dimensions. In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others.The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement.The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.",
"title": ""
},
{
"docid": "b8efbca1cb19f077c53ce8a7471ed31e",
"text": "Microblogging sites such as Twitter can play a vital role in spreading information during “natural” or man-made disasters. But the volume and velocity of tweets posted during crises today tend to be extremely high, making it hard for disaster-affected communities and professional emergency responders to process the information in a timely manner. Furthermore, posts tend to vary highly in terms of their subjects and usefulness; from messages that are entirely off-topic or personal in nature, to messages containing critical information that augments situational awareness. Finding actionable information can accelerate disaster response and alleviate both property and human losses. In this paper, we describe automatic methods for extracting information from microblog posts. Specifically, we focus on extracting valuable “information nuggets”, brief, self-contained information items relevant to disaster response. Our methods leverage machine learning methods for classifying posts and information extraction. Our results, validated over one large disaster-related dataset, reveal that a careful design can yield an effective system, paving the way for more sophisticated data analysis and visualization systems.",
"title": ""
},
{
"docid": "9eab2aa7c4fbfadb5642b47dd08c2014",
"text": "A class of matrices (H-matrices) is introduced which have the following properties. (i) They are sparse in the sense that only few data are needed for their representation. (ii) The matrix-vector multiplication is of almost linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but their truncations to the H-matrix format are again of almost linear complexity. (iv) The same statement holds for the inverse of an H-matrix. This paper is the first of a series and is devoted to the first introduction of the H-matrix concept. Two concret formats are described. The first one is the simplest possible. Nevertheless, it allows the exact inversion of tridiagonal matrices. The second one is able to approximate discrete integral operators. AMS Subject Classifications: 65F05, 65F30, 65F50.",
"title": ""
},
{
"docid": "6be3f84e371874e2df32de9cb1d92482",
"text": "We present an accurate and efficient stereo matching method using locally shared labels, a new labeling scheme that enables spatial propagation in MRF inference using graph cuts. They give each pixel and region a set of candidate disparity labels, which are randomly initialized, spatially propagated, and refined for continuous disparity estimation. We cast the selection and propagation of locally-defined disparity labels as fusion-based energy minimization. The joint use of graph cuts and locally shared labels has advantages over previous approaches based on fusion moves or belief propagation, it produces submodular moves deriving a subproblem optimality, enables powerful randomized search, helps to find good smooth, locally planar disparity maps, which are reasonable for natural scenes, allows parallel computation of both unary and pairwise costs. Our method is evaluated using the Middlebury stereo benchmark and achieves first place in sub-pixel accuracy.",
"title": ""
},
{
"docid": "a6acba54f34d1d101f4abb00f4fe4675",
"text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.",
"title": ""
},
{
"docid": "b0b2e50ea9020f6dd6419fbb0520cdfd",
"text": "Social interactions, such as an aggressive encounter between two conspecific males or a mating encounter between a male and a female, typically progress from an initial appetitive or motivational phase, to a final consummatory phase. This progression involves both changes in the intensity of the animals' internal state of arousal or motivation and sequential changes in their behavior. How are these internal states, and their escalating intensity, encoded in the brain? Does this escalation drive the progression from the appetitive/motivational to the consummatory phase of a social interaction and, if so, how are appropriate behaviors chosen during this progression? Recent work on social behaviors in flies and mice suggests possible ways in which changes in internal state intensity during a social encounter may be encoded and coupled to appropriate behavioral decisions at appropriate phases of the interaction. These studies may have relevance to understanding how emotion states influence cognitive behavioral decisions at higher levels of brain function.",
"title": ""
},
{
"docid": "0b22d7f6326210f02da44b0fa686f25a",
"text": "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.",
"title": ""
},
{
"docid": "62efd4c3e2edc5d8124d5c926484d79b",
"text": "OBJECTIVE\nResearch studies show that social media may be valuable tools in the disease surveillance toolkit used for improving public health professionals' ability to detect disease outbreaks faster than traditional methods and to enhance outbreak response. A social media work group, consisting of surveillance practitioners, academic researchers, and other subject matter experts convened by the International Society for Disease Surveillance, conducted a systematic primary literature review using the PRISMA framework to identify research, published through February 2013, answering either of the following questions: Can social media be integrated into disease surveillance practice and outbreak management to support and improve public health?Can social media be used to effectively target populations, specifically vulnerable populations, to test an intervention and interact with a community to improve health outcomes?Examples of social media included are Facebook, MySpace, microblogs (e.g., Twitter), blogs, and discussion forums. For Question 1, 33 manuscripts were identified, starting in 2009 with topics on Influenza-like Illnesses (n = 15), Infectious Diseases (n = 6), Non-infectious Diseases (n = 4), Medication and Vaccines (n = 3), and Other (n = 5). For Question 2, 32 manuscripts were identified, the first in 2000 with topics on Health Risk Behaviors (n = 10), Infectious Diseases (n = 3), Non-infectious Diseases (n = 9), and Other (n = 10).\n\n\nCONCLUSIONS\nThe literature on the use of social media to support public health practice has identified many gaps and biases in current knowledge. Despite the potential for success identified in exploratory studies, there are limited studies on interventions and little use of social media in practice. However, information gleaned from the articles demonstrates the effectiveness of social media in supporting and improving public health and in identifying target populations for intervention. A primary recommendation resulting from the review is to identify opportunities that enable public health professionals to integrate social media analytics into disease surveillance and outbreak management practice.",
"title": ""
},
{
"docid": "8a37001733b0ee384277526bd864fe04",
"text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.",
"title": ""
}
] |
scidocsrr
|
84cafae58d4e9c4e246b658f99433710
|
Eigenvalues and eigenvectors of generalized DFT, generalized DHT, DCT-IV and DST-IV matrices
|
[
{
"docid": "ba2d02d8c3e389b9b7659287eb406b16",
"text": "We propose and consolidate a definition of the discrete fractional Fourier transform that generalizes the discrete Fourier transform (DFT) in the same sense that the continuous fractional Fourier transform generalizes the continuous ordinary Fourier transform. This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite–Gaussian functions. The definition is exactlyunitary, index additive, and reduces to the DFT for unit order. The fact that this definition satisfies all the desirable properties expected of the discrete fractional Fourier transform supports our confidence that it will be accepted as the definitive definition of this transform.",
"title": ""
}
] |
[
{
"docid": "50e9cf4ff8265ce1567a9cc82d1dc937",
"text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naà ̄ve Bayesian classifier. A Naà ̄ve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So itâ€TMs pretty clear by now that statistics and machine learning arenâ€TMt very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models",
"title": ""
},
{
"docid": "26b8ec80d9fe7317e306bed3cd5c9fa4",
"text": "We describe a method for disambiguating Chinese commas that is central to Chinese sentence segmentation. Chinese sentence segmentation is viewed as the detection of loosely coordinated clauses separated by commas. Trained and tested on data derived from the Chinese Treebank, our model achieves a classification accuracy of close to 90% overall, which translates to an F1 score of 70% for detecting commas that signal sentence boundaries.",
"title": ""
},
{
"docid": "2084a38c285ebfb2d5e40e8667414d0d",
"text": "Differential Evolution (DE) algorithm is a new heuristic approach mainly having three advantages; finding the true global minimum regardless of the initial parameter values, fast convergence, and using few control parameters. DE algorithm is a population based algorithm like genetic algorithms using similar operators; crossover, mutation and selection. In this work, we have compared the performance of DE algorithm to that of some other well known versions of genetic algorithms: PGA, Grefensstette, Eshelman. In simulation studies, De Jong’s test functions have been used. From the simulation results, it was observed that the convergence speed of DE is significantly better than genetic algorithms. Therefore, DE algorithm seems to be a promising approach for engineering optimization problems.",
"title": ""
},
{
"docid": "8327cb7a8d39ce8f8f982aa38cdd517e",
"text": "Although many valuable visualizations have been developed to gain insights from large data sets, selecting an appropriate visualization for a specific data set and goal remains challenging for non-experts. In this paper, we propose a novel approach for knowledge-assisted, context-aware visualization recommendation. Both semantic web data and visualization components are annotated with formalized visualization knowledge from an ontology. We present a recommendation algorithm that leverages those annotations to provide visualization components that support the users’ data and task. We successfully proved the practicability of our approach by integrating it into two research prototypes. Keywords-recommendation, visualization, ontology, mashup",
"title": ""
},
{
"docid": "461786442ec8b8762019bb82d65491a5",
"text": "Fog computing is a new paradigm providing network services such as computing, storage between the end users and cloud. The distributed and open structure are the characteristics of fog computing, which make it vulnerable and very weak to security threats. In this article, the interaction between vulnerable nodes and malicious nodes in the fog computing is investigated as a non-cooperative differential game. The complex decision making process is reviewed and analyzed. To solve the game, a fictitious play-based algorithm is which the vulnerable node and the malicious nodes reach a feedback Nash equilibrium. We attain optimal strategy of energy consumption with QoS guarantee for the system, which are conveniently operated and suitable for fog nodes. The system simulation identifies the propagation of malicious nodes. We also determine the effects of various parameters on the optimal strategy. The simulation results support a theoretical foundation to limit malicious nodes in fog computing, which can help fog service providers make the optimal dynamic strategies when different types of nodes dynamically change their strategies.",
"title": ""
},
{
"docid": "5a525ccce94c64cd8b2d8cf9125a7802",
"text": "and others at both organizations for their support and valuable input. Special thanks to Grey Advertising's Ben Arno who suggested the term brand resonance. Additional thanks to workshop participants at Duke University and Dartmouth College. MSI was established in 1961 as a not-for profit institute with the goal of bringing together business leaders and academics to create knowledge that will improve business performance. The primary mission was to provide intellectual leadership in marketing and its allied fields. Over the years, MSI's global network of scholars from leading graduate schools of management and thought leaders from sponsoring corporations has expanded to encompass multiple business functions and disciplines. Issues of key importance to business performance are identified by the Board of Trustees, which represents MSI corporations and the academic community. MSI supports studies by academics on these issues and disseminates the results through conferences and workshops, as well as through its publications series. This report, prepared with the support of MSI, is being sent to you for your information and review. It is not to be reproduced or published, in any form or by any means, electronic or mechanical, without written permission from the Institute and the author. Building a strong brand has been shown to provide numerous financial rewards to firms, and has become a top priority for many organizations. In this report, author Keller outlines the Customer-Based Brand Equity (CBBE) model to assist management in their brand-building efforts. According to the model, building a strong brand involves four steps: (1) establishing the proper brand identity, that is, establishing breadth and depth of brand awareness, (2) creating the appropriate brand meaning through strong, favorable, and unique brand associations, (3) eliciting positive, accessible brand responses, and (4) forging brand relationships with customers that are characterized by intense, active loyalty. Achieving these four steps, in turn, involves establishing six brand-building blocks—brand salience, brand performance, brand imagery, brand judgments, brand feelings, and brand resonance. The most valuable brand-building block, brand resonance, occurs when all the other brand-building blocks are established. With true brand resonance, customers express a high degree of loyalty to the brand such that they actively seek means to interact with the brand and share their experiences with others. Firms that are able to achieve brand resonance should reap a host of benefits, for example, greater price premiums and more efficient and effective marketing programs. The CBBE model provides a yardstick by …",
"title": ""
},
{
"docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0",
"text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.",
"title": ""
},
{
"docid": "77652c8d471be4d28fb48aa5e2c3ee41",
"text": "This paper is a survey and an analysis of different ways of using deep learning to generate musical content. We propose a methodology based on five dimensions: Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file). Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot. Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks. Challenges - What are the limitations and open challenges? Examples are: variability, interactivity and creativity. Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation. For each dimension, we conduct a comparative analysis of various models and techniques and propose some tentative multidimensional typology which is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenges and strategies. The last part of the paper includes some discussion and some prospects. This is a simplified version (weak DRM) of the book: Briot, J.-P., Hadjeres, G. and Pachet, F.-D. (2019) Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer.",
"title": ""
},
{
"docid": "b8b4e582fbcc23a5a72cdaee1edade32",
"text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.",
"title": ""
},
{
"docid": "6ce429d7974c9593f4323ec306488b1f",
"text": "The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. We present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. Our key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training. Using our transfer learning method we improve baseline NMT models by an average of 5.6 BLEU on four low-resource language pairs. Ensembling and unknown word replacement add another 2 BLEU which brings the NMT performance on low-resource machine translation close to a strong syntax based machine translation (SBMT) system, exceeding its performance on one language pair. Additionally, using the transfer learning model for re-scoring, we can improve the SBMT system by an average of 1.3 BLEU, improving the state-of-the-art on low-resource machine translation.",
"title": ""
},
{
"docid": "621d66aeff489c65eb9877270cb86b5f",
"text": "Electronic customer relationship management (e-CRM) emerges from the Internet and Web technology to facilitate the implementation of CRM. It focuses on Internet- or Web-based interaction between companies and their customers. Above all, e-CRM enables service sectors to provide appropriate services and products to satisfy the customers so as to retain customer royalty and enhance customer profitability. This research is to explore the key research issues about e-CRM performance influence for service sectors in Taiwan. A research model is proposed based on the widely applied technology-organization-environment (TOE) framework. Survey data from the questionnaire are collected to empirically assess our research model.",
"title": ""
},
{
"docid": "9222bd9fc9aeea6917b75bf0eb4aab63",
"text": "In this paper we implemented different models to solve the review usefulness classification problem. Both feed-forward neural network and LSTM were able to beat the baseline model. Performances of the models are evaluated using 0-1 loss and F-1 scores. In general, LSTM outperformed feed-forward neural network, as we trained our own word vectors in that model, and LSTM itself was able to store more information as it processes sequence of words. Besides, we built a recommender system using the user-item-rating data to further investigate this dataset and intended to make connection with review classification. The performance of recommender system is measured by RMSE in rating predictions.",
"title": ""
},
{
"docid": "00f2bb2dd3840379c2442c018407b1c8",
"text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.",
"title": ""
},
{
"docid": "6d89321d33ba5d923a7f31589888f430",
"text": "OBJECTIVE\nThe pain experienced by burn patients during physical therapy range of motion exercises can be extreme and can discourage patients from complying with their physical therapy. We explored the novel use of immersive virtual reality (VR) to distract patients from pain during physical therapy.\n\n\nSETTING\nThis study was conducted at the burn care unit of a regional trauma center.\n\n\nPATIENTS\nTwelve patients aged 19 to 47 years (average of 21% total body surface area burned) performed range of motion exercises of their injured extremity under an occupational therapist's direction.\n\n\nINTERVENTION\nEach patient spent 3 minutes of physical therapy with no distraction and 3 minutes of physical therapy in VR (condition order randomized and counter-balanced).\n\n\nOUTCOME MEASURES\nFive visual analogue scale pain scores for each treatment condition served as the dependent variables.\n\n\nRESULTS\nAll patients reported less pain when distracted with VR, and the magnitude of pain reduction by VR was statistically significant (e.g., time spent thinking about pain during physical therapy dropped from 60 to 14 mm on a 100-mm scale). The results of this study may be examined in more detail at www.hitL.washington.edu/projects/burn/.\n\n\nCONCLUSIONS\nResults provided preliminary evidence that VR can function as a strong nonpharmacologic pain reduction technique for adult burn patients during physical therapy and potentially for other painful procedures or pain populations.",
"title": ""
},
{
"docid": "683fe7f0b577acca2ef3af95015a62d6",
"text": "Because of its high storage density with superior scalability, low integration cost and reasonably high access speed, spin-torque transfer random access memory (STT RAM) appears to have a promising potential to replace SRAM as last-level on-chip cache (e.g., L2 or L3 cache) for microprocessors. Due to unique operational characteristics of its storage device magnetic tunneling junction (MTJ), STT RAM is inherently subject to a write latency versus read latency tradeoff that is determined by the memory cell size. This paper first quantitatively studies how different memory cell sizing may impact the overall computing system performance, and shows that different computing workloads may have conflicting expectations on memory cell sizing. Leveraging MTJ device switching characteristics, we further propose an STT RAM architecture design method that can make STT RAM cache with relatively small memory cell size perform well over a wide spectrum of computing benchmarks. This has been well demonstrated using CACTI-based memory modeling and computing system performance simulations using SimpleScalar. Moreover, we show that this design method can also reduce STT RAM cache energy consumption by up to 30% over a variety of benchmarks.",
"title": ""
},
{
"docid": "216f97a97d240456d36ec765fd45739e",
"text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.",
"title": ""
},
{
"docid": "b7617b5dd2a6f392f282f6a34f5b6751",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
},
{
"docid": "920306f59d16291d0cdf80e984a1b5de",
"text": "In contrast to common smooth cables, helix cables possess a spiral circular salient with a diameter ranging from 6 to 10mm on their surface. Helix cables can effectively inhibit the windand raininduced vibrations of cables and are thus commonly used on newly built bridges. In this study, a helix cable-detecting robot is proposed to inspect the inner broken wire condition of helix cables. This robot consists of a driving trolley, as well as upper and lower supporting links. The driving trolley and supporting links were connected by fixed joints and are mounted opposite to each other along the cable. To ensure that the body of the robot is not in contact with the cable surface, a magnetic absorption unit was designed in the driving trolley. A climbing unit was placed on the body of the robot which can enable the trolley to rotate arbitrarily to adapt its water conductivity lines on cables with different screw pitches. A centrifugal speed regulation method was also proposed to ensure the safe return of the robot to the ground. Theoretical analysis and experimental results suggest that the mechanism could carry a payload of 1.5 kg and climb steadily along the helix cable at an inclined angle ranging from 30◦ to 85◦. The load-carrying ability satisfied the requirement to carry sensors or instruments such as cameras to inspect the cable.",
"title": ""
},
{
"docid": "f36b96ef76841a018e76a3bc84072b5a",
"text": "Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness. In this paper, we show that each embedding model captures more information than directly apparent. A linear transformation that adjusts the similarity order of the model without any external resource can tailor it to achieve better results in those aspects, providing a new perspective on how embeddings encode divergent linguistic information. In addition, we explore the relation between intrinsic and extrinsic evaluation, as the effect of our transformations in downstream tasks is higher for unsupervised systems than for supervised ones.",
"title": ""
},
{
"docid": "7b8fc21d27c9eb7c8e1df46eec7d6b6d",
"text": "This paper examines two methods - magnet shifting and optimizing the magnet pole arc - for reducing cogging torque in permanent magnet machines. The methods were applied to existing machine designs and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be nearly eliminated using the two methods. However, there was some residual cogging in the prototypes due to manufacturing difficulties. In both methods, the back electromotive force was improved by reducing harmonics while preserving the magnitude.",
"title": ""
}
] |
scidocsrr
|
948ced35e7164c1092d9069e0b3efa85
|
Life cycle assessment of building materials: Comparative analysis of energy and environmental impacts and evaluation of the eco-efficiency improvement potential
|
[
{
"docid": "85d4ac147a4517092b9f81f89af8b875",
"text": "This article is an update of an article five of us published in 1992. The areas of Multiple Criteria Decision Making (MCDM) and Multiattribute Utility Theory (MAUT) continue to be active areas of management science research and application. This paper extends the history of these areas and discusses topics we believe to be important for the future of these fields. as well as two anonymous reviewers for valuable comments.",
"title": ""
}
] |
[
{
"docid": "4cb49a91b5a30909c99138a8e36badcd",
"text": "The main goal of Business Process Management (BPM) is conceptualising, operationalizing and controlling workflows in organisations based on process models. In this paper we discuss several limitations of the workflow paradigm and suggest that process models can also play an important role in analysing how organisations think about themselves through storytelling. We contrast the workflow paradigm with storytelling through a comparative analysis. We also report a case study where storytelling has been used to elicit and document the practices of an IT maintenance team. This research contributes towards the development of better process modelling languages and tools.",
"title": ""
},
{
"docid": "ae3e9bf485d4945af625fca31eaedb76",
"text": "This document describes concisely the ubiquitous class of exponential family distributions met in statistics. The first part recalls definitions and summarizes main properties and duality with Bregman divergences (all proofs are skipped). The second part lists decompositions and related formula of common exponential family distributions. We recall the Fisher-Rao-Riemannian geometries and the dual affine connection information geometries of statistical manifolds. It is intended to maintain and update this document and catalog by adding new distribution items. See the jMEF library, a Java package for processing mixture of exponential families. Available for download at http://www.lix.polytechnique.fr/~nielsen/MEF/ École Polytechnique (France) and Sony Computer Science Laboratories Inc. (Japan). École Polytechnique (France).",
"title": ""
},
{
"docid": "c6e0843498747096ebdafd51d4b5cca6",
"text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.",
"title": ""
},
{
"docid": "bde4436370b1d5e1423d1b9c710a47ad",
"text": "This paper provides a review of the literature addressing sensorless operation methods of PM brushless machines. The methods explained are state-of-the-art of open and closed loop control strategies. The closed loop review includes those methods based on voltage and current measurements, those methods based on back emf measurements, and those methods based on novel techniques not included in the previous categories. The paper concludes with a comparison table including all main features for all control strategies",
"title": ""
},
{
"docid": "525a819d97e84862d4190b1e0aa4acc0",
"text": "HELIOS2014 is a 2D soccer simulation team which has been participating in the RoboCup competition since 2000. We recently focus on an online multiagent planning using tree search methodology. This paper describes the overview of our search framework and an evaluation method to select the best action sequence.",
"title": ""
},
{
"docid": "71e6994bf56ed193a3a04728c7022a45",
"text": "To evaluate timing and duration differences in airway protection and esophageal opening after oral intubation and mechanical ventilation for acute respiratory distress syndrome (ARDS) survivors versus age-matched healthy volunteers. Orally intubated adult (≥ 18 years old) patients receiving mechanical ventilation for ARDS were evaluated for swallowing impairments via a videofluoroscopic swallow study (VFSS) during usual care. Exclusion criteria were tracheostomy, neurological impairment, and head and neck cancer. Previously recruited healthy volunteers (n = 56) served as age-matched controls. All subjects were evaluated using 5-ml thin liquid barium boluses. VFSS recordings were reviewed frame-by-frame for the onsets of 9 pharyngeal and laryngeal events during swallowing. Eleven patients met inclusion criteria, with a median (interquartile range [IQR]) intubation duration of 14 (9, 16) days, and VFSSs completed a median of 5 (4, 13) days post-extubation. After arrival of the bolus in the pharynx, ARDS patients achieved maximum laryngeal closure a median (IQR) of 184 (158, 351) ms later than age-matched, healthy volunteers (p < 0.001) and it took longer to achieve laryngeal closure with a median (IQR) difference of 151 (103, 217) ms (p < 0.001), although there was no significant difference in duration of laryngeal closure. Pharyngoesophageal segment opening was a median (IQR) of − 116 (− 183, 1) ms (p = 0.004) shorter than in age-matched, healthy controls. Evaluation of swallowing physiology after oral endotracheal intubation in ARDS patients demonstrates slowed pharyngeal and laryngeal swallowing timing, suggesting swallow-related muscle weakness. These findings may highlight specific areas for further evaluation and potential therapeutic intervention to reduce post-extubation aspiration.",
"title": ""
},
{
"docid": "9fba167ef82aa8c153986ea498683ff6",
"text": "Purpose – The purpose of this conceptual paper is to identify important elements of brand building based on a literature review and case studies of successful brands in India. Design/methodology/approach – This paper is based on a review of the literature and takes a case study approach. The paper suggests the framework for building brand identity in sequential order, namely, positioning the brand, communicating the brand message, delivering the brand performance, and leveraging the brand equity. Findings – Brand-building effort has to be aligned with organizational processes that help deliver the promises to customers through all company departments, intermediaries, suppliers, etc., as all these play an important role in the experience customers have with the brand. Originality/value – The paper uses case studies of leading Indian brands to illustrate the importance of action elements in building brands in competitive markets.",
"title": ""
},
{
"docid": "80ee585d49685a24a2011a1ddc27bb55",
"text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.",
"title": ""
},
{
"docid": "37af8daa32affcdedb0b4820651a0b62",
"text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.",
"title": ""
},
{
"docid": "833ec45dfe660377eb7367e179070322",
"text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.",
"title": ""
},
{
"docid": "10e6b505ba74b1c8aea1417a4eb36c30",
"text": "This meta-analysis summarizes teaching effectiveness studies of the past decade and investigates the role of theory and research design in disentangling results. Compared to past analyses based on the process–product model, a framework based on cognitive models of teaching and learning proved useful in analyzing studies and accounting for variations in effect sizes. Although the effects of teaching on student learning were diverse and complex, they were fairly systematic. The authors found the largest effects for domainspecific components of teaching—teaching most proximal to executive processes of learning. By taking into account research design, the authors further disentangled meta-analytic findings. For example, domain-specific teaching components were mainly studied with quasi-experimental or experimental designs. Finally, correlational survey studies dominated teaching effectiveness studies in the past decade but proved to be more distal from the teaching–learning process.",
"title": ""
},
{
"docid": "9a38b18bd69d17604b6e05b9da450c2d",
"text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing "Big Data" and today's activities on big data tools and techniques.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "a36e43f03735d7610677465bd78e9b6f",
"text": "Existing Poisson mesh editing techniques mainly focus on designing schemes to propagate deformation from a given boundary condition to a region of interest. Although solving the Poisson system in the least-squares sense distributes the distortion errors over the entire region of interest, large deformation in the boundary condition might still lead to severely distorted results. We propose to optimize the boundary condition (the merging boundary) for Poisson mesh merging. The user needs only to casually mark a source region and a target region. Our algorithm automatically searches for an optimal boundary condition within the marked regions such that the change of the found boundary during merging is minimal in terms of similarity transformation. Experimental results demonstrate that our merging tool is easy to use and produces visually better merging results than unoptimized techniques.",
"title": ""
},
{
"docid": "3c848d254ae907a75dcbf502ed94aa84",
"text": "We study the problem of computing routes for electric vehicles (EVs) in road networks. Since their battery capacity is limited, and consumed energy per distance increases with velocity, driving the fastest route is often not desirable and may even be infeasible. On the other hand, the energy-optimal route may be too conservative in that it contains unnecessary detours or simply takes too long. In this work, we propose to use multicriteria optimization to obtain Pareto sets of routes that trade energy consumption for speed. In particular, we exploit the fact that the same road segment can be driven at different speeds within reasonable intervals. As a result, we are able to provide routes with low energy consumption that still follow major roads, such as freeways. Unfortunately, the size of the resulting Pareto sets can be too large to be practical. We therefore also propose several nontrivial techniques that can be applied on-line at query time in order to speed up computation and filter insignificant solutions from the Pareto sets. Our extensive experimental study, which uses a real-world energy consumption model, reveals that we are able to compute diverse sets of alternative routes on continental networks that closely resemble the exact Pareto set in just under a second—several orders of magnitude faster than the exhaustive algorithm. 1998 ACM Subject Classification G.2.2 Graph Theory, G.2.3 Applications",
"title": ""
},
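The passage above computes Pareto sets of routes trading travel time against energy consumption. As a rough illustration of the underlying bicriteria idea only (not the paper's actual speed-dependent algorithm), the sketch below runs a simple label-setting search with Pareto-dominance pruning over a small graph; the graph layout, edge costs, and function names are hypothetical.

```python
import heapq

def dominates(a, b):
    """Label a dominates b if it is no worse in both criteria and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_routes(graph, source, target):
    """Bicriteria (time, energy) search returning Pareto-optimal labels at the target.

    graph: dict mapping every node -> list of (neighbor, time, energy) edges.
    """
    heap = [(0.0, 0.0, source, [source])]          # ordered by time first
    labels = {node: [] for node in graph}          # non-dominated labels per node
    results = []

    while heap:
        time, energy, node, path = heapq.heappop(heap)
        label = (time, energy)
        # Skip labels dominated by something already settled at this node.
        if any(dominates(other, label) for other in labels[node]):
            continue
        labels[node].append(label)
        if node == target:
            results.append((time, energy, path))
            continue
        for nxt, dt, de in graph.get(node, []):
            new = (time + dt, energy + de)
            if not any(dominates(other, new) for other in labels[nxt]):
                heapq.heappush(heap, (new[0], new[1], nxt, path + [nxt]))
    return results

# Tiny hypothetical road network: edges carry (travel time, energy use).
graph = {
    "A": [("B", 10, 2.0), ("C", 4, 3.5)],
    "B": [("D", 5, 1.0)],
    "C": [("D", 6, 1.5)],
    "D": [],
}
for time, energy, path in pareto_routes(graph, "A", "D"):
    print(f"time={time}, energy={energy}, path={path}")
```

Both A-B-D (slower, cheaper) and A-C-D (faster, costlier) survive here because neither dominates the other, which is exactly the kind of trade-off set the passage describes.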
{
"docid": "abb45e408cb37a0ad89f0b810b7f583b",
"text": "In a mobile computing environment, a user carrying a portable computer can execute a mobile t11m,,· action by submitting the ope.rations of the transaction to distributed data servers from different locations. M a result of this mobility, the operations of the transaction may be executed at different servers. The distribution oC operations implies that the transmission of messages (such as those involved in a two phase commit protocol) may be required among these data servers in order to coordinate the execution ofthese operations. In this paper, we will address the distribution oC operations that update partitioned data in mobile environments. We show that, for operations pertaining to resource allocation, the message overhead (e.g., for a 2PC protocol) introduced by the distribution of operations is undesirable and unnecessary. We introduce a new algorithm, the RenlnJation Algorithm (RA), that does not necessitate the incurring of message overheads Cor the commitment of mobile transactions. We address two issues related to the RA algorithm: a termination protocol and a protocol for non_partition.commutotive operation\". We perform a comparison between the proposed RA algorithm and existing solutions that use a 2PC protocol.",
"title": ""
},
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
},
{
"docid": "ad7a5bccf168ac3b13e13ccf12a94f7d",
"text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.",
"title": ""
},
{
"docid": "c86aad62e950d7c10f93699d421492d5",
"text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.",
"title": ""
}
] |
scidocsrr
|
7dc33ca0df883f80793682ba14baff7a
|
Three-level neutral-point-clamped inverters in transformerless PV systems — State of the art
|
[
{
"docid": "a0e7cdeefc33d4078702e5368dd9f5b9",
"text": "This paper presents a single-phase five-level photovoltaic (PV) inverter topology for grid-connected PV systems with a novel pulsewidth-modulated (PWM) control scheme. Two reference signals identical to each other with an offset equivalent to the amplitude of the triangular carrier signal were used to generate PWM signals for the switches. A digital proportional-integral current control algorithm is implemented in DSP TMS320F2812 to keep the current injected into the grid sinusoidal and to have high dynamic performance with rapidly changing atmospheric conditions. The inverter offers much less total harmonic distortion and can operate at near-unity power factor. The proposed system is verified through simulation and is implemented in a prototype, and the experimental results are compared with that with the conventional single-phase three-level grid-connected PWM inverter.",
"title": ""
}
] |
[
{
"docid": "2220633d6343df0ebb2d292358ce182b",
"text": "This paper presents a system for fully automatic recognition and reconstruction of 3D objects in image databases. We pose the object recognition problem as one of finding consistent matches between all images, subject to the constraint that the images were taken from a perspective camera. We assume that the objects or scenes are rigid. For each image, we associate a camera matrix, which is parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object, using a sparse bundle adjustment algorithm. Our results demonstrate that it is possible to recognise and reconstruct 3D objects from an unordered image database with no user input at all.",
"title": ""
},
{
"docid": "752e6d6f34ffc638e9a0d984a62db184",
"text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.",
"title": ""
},
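The passage above reports that automated parameter optimization (Caret, an R package) can substantially improve defect prediction models. Purely as a loose Python analogue, not the study's actual setup, the sketch below tunes a random forest with randomized search and an AUC objective; the synthetic data and parameter grid are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Placeholder data standing in for a defect dataset
# (rows = modules, columns = code metrics, label = defective or not).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)

# Hypothetical search space; a real study would derive this from the classifier's documentation.
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 4, 8, 16],
    "min_samples_leaf": [1, 2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,
    scoring="roc_auc",     # AUC, the performance measure quoted in the passage
    cv=5,
    random_state=0,
)
search.fit(X, y)
print("best AUC:", search.best_score_)
print("best parameters:", search.best_params_)
```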
{
"docid": "667a457dcb1f379abd4e355e429dc40d",
"text": "BACKGROUND\nViolent death is a serious problem in the United States. Previous research showing US rates of violent death compared with other high-income countries used data that are more than a decade old.\n\n\nMETHODS\nWe examined 2010 mortality data obtained from the World Health Organization for populous, high-income countries (n = 23). Death rates per 100,000 population were calculated for each country and for the aggregation of all non-US countries overall and by age and sex. Tests of significance were performed using Poisson and negative binomial regressions.\n\n\nRESULTS\nUS homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher. For 15- to 24-year-olds, the gun homicide rate in the United States was 49.0 times higher. Firearm-related suicide rates were 8.0 times higher in the United States, but the overall suicide rates were average. Unintentional firearm deaths were 6.2 times higher in the United States. The overall firearm death rate in the United States from all causes was 10.0 times higher. Ninety percent of women, 91% of children aged 0 to 14 years, 92% of youth aged 15 to 24 years, and 82% of all people killed by firearms were from the United States.\n\n\nCONCLUSIONS\nThe United States has an enormous firearm problem compared with other high-income countries, with higher rates of homicide and firearm-related suicide. Compared with 2003 estimates, the US firearm death rate remains unchanged while firearm death rates in other countries decreased. Thus, the already high relative rates of firearm homicide, firearm suicide, and unintentional firearm death in the United States compared with other high-income countries increased between 2003 and 2010.",
"title": ""
},
{
"docid": "b42e92aba32ff037362ecc40b816d063",
"text": "In this paper we discuss security issues for cloud computing including storage security, data security, and network security and secure virtualization. Then we select some topics and describe them in more detail. In particular, we discuss a scheme for secure third party publications of documents in a cloud. Next we discuss secure federated query processing with map Reduce and Hadoop. Next we discuss the use of secure coprocessors for cloud computing. Third we discuss XACML implementation for Hadoop. We believe that building trusted applications from untrusted components will be a major aspect of secure cloud computing.",
"title": ""
},
{
"docid": "2ecd815af00b9961259fa9b2a9185483",
"text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.",
"title": ""
},
{
"docid": "5343db8a8bc5e300b9ad488d0eda56d4",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to differences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second, two ambiguous elements are present, each of which functions both as a connector and a disjunctor.",
"title": ""
},
{
"docid": "9cb13d599da25991d11d276aaa76a005",
"text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.",
"title": ""
},
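The passage above matches each heartbeat against a set of QRS templates using descriptors such as maximal cross-correlation. The sketch below shows only that single descriptor as a minimal illustration; the threshold, window length, and synthetic waveforms are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def max_normalized_crosscorr(beat, template):
    """Peak normalized cross-correlation between a heartbeat window and a QRS template."""
    beat = (beat - beat.mean()) / (beat.std() + 1e-12)
    template = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(beat, template, mode="full") / len(template)
    return corr.max()

def classify_beat(beat, templates, threshold=0.8):
    """Label a beat 'normal' if it matches any template closely enough, else 'ventricular'."""
    score = max(max_normalized_crosscorr(beat, t) for t in templates)
    return ("normal", score) if score >= threshold else ("ventricular", score)

# Toy example with synthetic same-length windows standing in for real beats.
t = np.linspace(0, 1, 138)
template = np.exp(-((t - 0.2) ** 2) / 0.002)        # stand-in 'normal' QRS shape
wide_beat = np.exp(-((t - 0.35) ** 2) / 0.05)       # stand-in wide ectopic shape
print(classify_beat(template + 0.01 * np.random.randn(t.size), [template]))
print(classify_beat(wide_beat, [template]))
```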
{
"docid": "3a852aa880c564a85cc8741ce7427ced",
"text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.",
"title": ""
},
{
"docid": "64c156ee4171b5b84fd4eedb1d922f55",
"text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.",
"title": ""
},
{
"docid": "c6029c95b8a6b2c6dfb688ac049427dc",
"text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.",
"title": ""
},
{
"docid": "17d1439650efccf83390834ba933db1a",
"text": "The arterial vascularization of the pineal gland (PG) remains a debatable subject. This study aims to provide detailed information about the arterial vascularization of the PG. Thirty adult human brains were obtained from routine autopsies. Cerebral arteries were separately cannulated and injected with colored latex. The dissections were carried out using a surgical microscope. The diameters of the branches supplying the PG at their origin and vascularization areas of the branches of the arteries were investigated. The main artery of the PG was the lateral pineal artery, and it originated from the posterior circulation. The other arteries included the medial pineal artery from the posterior circulation and the rostral pineal artery mainly from the anterior circulation. Posteromedial choroidal artery was an important artery that branched to the PG. The arterial supply to the PG was studied comprehensively considering the debate and inadequacy of previously published studies on this issue available in the literature. This anatomical knowledge may be helpful for surgical treatment of pathologies of the PG, especially in children who develop more pathology in this region than adults.",
"title": ""
},
{
"docid": "1ddfbf702c35a689367cd2b27dc1c6c6",
"text": "In this paper, we propose a simple but powerful prior, color attenuation prior, for haze removal from a single input hazy image. By creating a linear model for modelling the scene depth of the hazy image under this novel prior and learning the parameters of the model by using a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily remove haze from a single image. Experimental results show that the proposed approach is highly efficient and it outperforms state-of-the-art haze removal algorithms in terms of the dehazing effect as well.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "3bd2bfd1c7652f8655d009c085d6ed5c",
"text": "The past decade has witnessed the boom of human-machine interactions, particularly via dialog systems. In this paper, we study the task of response generation in open-domain multi-turn dialog systems. Many research efforts have been dedicated to building intelligent dialog systems, yet few shed light on deepening or widening the chatting topics in a conversational session, which would attract users to talk more. To this end, this paper presents a novel deep scheme consisting of three channels, namely global, wide, and deep ones. The global channel encodes the complete historical information within the given context, the wide one employs an attention-based recurrent neural network model to predict the keywords that may not appear in the historical context, and the deep one trains a Multi-layer Perceptron model to select some keywords for an in-depth discussion. Thereafter, our scheme integrates the outputs of these three channels to generate desired responses. To justify our model, we conducted extensive experiments to compare our model with several state-of-the-art baselines on two datasets: one is constructed by ourselves and the other is a public benchmark dataset. Experimental results demonstrate that our model yields promising performance by widening or deepening the topics of interest.",
"title": ""
},
{
"docid": "d473619f76f81eced041df5bc012c246",
"text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.",
"title": ""
},
{
"docid": "17676785398d4ed24cc04cb3363a7596",
"text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.",
"title": ""
},
{
"docid": "4b74b9d4c4b38082f9f667e363f093b2",
"text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.",
"title": ""
},
{
"docid": "b885526ab7db7d7ed502698758117c80",
"text": "Cancer, more than any other human disease, now has a surfeit of potential molecular targets poised for therapeutic exploitation. Currently, a number of attractive and validated cancer targets remain outside of the reach of pharmacological regulation. Some have been described as undruggable, at least by traditional strategies. In this article, we outline the basis for the undruggable moniker, propose a reclassification of these targets as undrugged, and highlight three general classes of this imposing group as exemplars with some attendant strategies currently being explored to reclassify them. Expanding the spectrum of disease-relevant targets to pharmacological manipulation is central to reducing cancer morbidity and mortality.",
"title": ""
},
{
"docid": "ec0733962301d6024da773ad9d0f636d",
"text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.",
"title": ""
},
{
"docid": "21c7cbcf02141c60443f912ae5f1208b",
"text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).",
"title": ""
}
] |
scidocsrr
|
85d2de403377831ff1a6f5b7c671d438
|
Discrimination of focal and non-focal EEG signals using entropy-based features in EEMD and CEEMDAN domains
|
[
{
"docid": "8ff0683625b483ed1e77b1720bcc0a15",
"text": "A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach consists of sifting an ensemble of white noise-added signal (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different scale signals to collate in the proper intrinsic mode functions (IMF) dictated by the dyadic filter banks. As EEMD is a time–space analysis method, the added white noise is averaged out with sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physical meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturally without any a priori subjective criterion selection as in the intermittence test for the original EMD algorithm. This new approach utilizes the full advantage of the statistical characteristics of white noise to perturb the signal in its true solution neighborhood, and to cancel itself out after serving its purpose; therefore, it represents a substantial improvement over the original EMD and is a truly noise-assisted data analysis (NADA) method.",
"title": ""
}
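The passage above defines EEMD: add finite-amplitude white noise to the signal, sift each noisy copy with ordinary EMD, and average the resulting IMFs over the ensemble so the noise cancels out. The sketch below shows just that ensemble-averaging loop; it assumes an `emd_decompose(signal, max_imfs)` routine is supplied by some EMD implementation and is not tied to any particular library.

```python
import numpy as np

def eemd(signal, emd_decompose, noise_std=0.2, n_trials=100, max_imfs=8):
    """Ensemble Empirical Mode Decomposition (sketch).

    signal:        1-D numpy array.
    emd_decompose: callable (signal, max_imfs) -> array of shape (max_imfs, len(signal));
                   assumed to be provided by an ordinary EMD implementation.
    noise_std:     amplitude of the added white noise relative to the signal's std.
    """
    signal = np.asarray(signal, dtype=float)
    scale = noise_std * signal.std()
    accumulated = np.zeros((max_imfs, signal.size))

    for _ in range(n_trials):
        # Each trial sifts a different white-noise-perturbed copy of the data.
        noisy = signal + scale * np.random.randn(signal.size)
        accumulated += emd_decompose(noisy, max_imfs)

    # The added noise averages out; what survives is treated as the true IMFs.
    return accumulated / n_trials
```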
] |
[
{
"docid": "43919b011f7d65d82d03bb01a5e85435",
"text": "Self-inflicted burns are regularly admitted to burns units worldwide. Most of these patients are referred to psychiatric services and are successfully treated however some return to hospital with recurrent self-inflicted burns. The aim of this study is to explore the characteristics of the recurrent self-inflicted burn patients admitted to the Royal North Shore Hospital during 2004-2011. Burn patients were drawn from a computerized database and recurrent self-inflicted burn patients were identified. Of the total of 1442 burn patients, 40 (2.8%) were identified as self-inflicted burns. Of these patients, 5 (0.4%) were identified to have sustained previous self-inflicted burns and were interviewed by a psychiatrist. Each patient had been diagnosed with a borderline personality disorder and had suffered other forms of deliberate self-harm. Self-inflicted burns were utilized to relieve or help regulate psychological distress, rather than to commit suicide. Most patients had a history of emotional neglect, physical and/or sexual abuse during their early life experience. Following discharge from hospital, the patients described varying levels of psychiatric follow-up, from a post-discharge review at a local community mental health centre to twice-weekly psychotherapy. The patients who engaged in regular psychotherapy described feeling more in control of their emotions and reported having a longer period of abstinence from self-inflicted burn. Although these patients represent a small proportion of all burns, the repeat nature of their injuries led to a significant use of clinical resources. A coordinated and consistent treatment pathway involving surgical and psychiatric services for recurrent self-inflicted burns may assist in the management of these challenging patients.",
"title": ""
},
{
"docid": "916e10c8bd9f5aa443fa4d8316511c94",
"text": "A full-bridge LLC resonant converter with series-parallel connected transformers for an onboard battery charger of electric vehicles is proposed, which can realize zero voltage switching turn-on of power switches and zero current switching turn-off of rectifier diodes. In this converter, two same small transformers are employed instead of the single transformer in the traditional LLC resonant converter. The primary windings of these two transformers are series-connected to obtain equal primary current, while the secondary windings are parallel-connected to be provided with the same secondary voltage, so the power can be automatically balanced. Series-connection can reduce the turns of primary windings. Parallel-connection can reduce the current stress of the secondary windings and the conduction loss of rectifier diodes. Compared with the traditional LLC resonant converter with single transformer under same power level, the smaller low-profile cores can be used to reduce the transformers loss and improve heat dissipation. In this paper, the operating principle, steady state analysis, and design of the proposed converter are described, simulation and experimental prototype of the proposed LLC converter is established to verify the effectiveness of the proposed converter.",
"title": ""
},
{
"docid": "91f20c48f5a4329260aadb87a0d8024c",
"text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.",
"title": ""
},
{
"docid": "f76717050a5d891f63e475ba3e3ff955",
"text": "Computational Advertising is the currently emerging multidimensional statistical modeling sub-discipline in digital advertising industry. Web pages visited per user every day is considerably increasing, resulting in an enormous access to display advertisements (ads). The rate at which the ad is clicked by users is termed as the Click Through Rate (CTR) of an advertisement. This metric facilitates the measurement of the effectiveness of an advertisement. The placement of ads in appropriate location leads to the rise in the CTR value that influences the growth of customer access to advertisement resulting in increased profit rate for the ad exchange, publishers and advertisers. Thus it is imperative to predict the CTR metric in order to formulate an efficient ad placement strategy. This paper proposes a predictive model that generates the click through rate based on different dimensions of ad placement for display advertisements using statistical machine learning regression techniques such as multivariate linear regression (LR), poisson regression (PR) and support vector regression(SVR). The experiment result reports that SVR based click model outperforms in predicting CTR through hyperparameter optimization.",
"title": ""
},
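The passage above compares linear, Poisson, and support-vector regression for CTR prediction. A minimal sketch of that comparison is given below using scikit-learn; the synthetic feature matrix (standing in for ad-placement dimensions) and the target CTR values are placeholders for real logged data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, PoissonRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Placeholder features: e.g. encoded ad position, page category, ad size, hour of day.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = 0.01 + 0.05 * X[:, 0] + 0.02 * X[:, 1] + 0.005 * rng.standard_normal(500)  # synthetic CTR
y = np.clip(y, 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "poisson": PoissonRegressor(),
    "svr": SVR(kernel="rbf", C=1.0, epsilon=0.005),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.6f}")
```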
{
"docid": "210a1dda2fc4390a5b458528b176341e",
"text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.",
"title": ""
},
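The passage above builds on non-local operations that weight each position by its feature correlation with positions in a confined neighborhood. The sketch below is a minimal numpy version of such an operation on a generic feature map; it illustrates the non-local idea only and is not the NLRN architecture itself.

```python
import numpy as np

def nonlocal_response(features, radius=3):
    """Confined-neighborhood non-local operation on an (H, W, C) feature map.

    Each output position is a softmax-weighted sum of its neighbors' features,
    with weights given by dot-product similarity to the centre feature.
    """
    h, w, c = features.shape
    out = np.zeros_like(features)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            neigh = features[i0:i1, j0:j1].reshape(-1, c)   # (N, C) neighborhood
            sims = neigh @ features[i, j]                   # dot-product similarity
            weights = np.exp(sims - sims.max())
            weights /= weights.sum()
            out[i, j] = weights @ neigh                     # weighted aggregation
    return out

# Toy usage on a random 16x16 feature map with 8 channels.
fmap = np.random.rand(16, 16, 8).astype(np.float32)
print(nonlocal_response(fmap).shape)   # (16, 16, 8)
```

Restricting the neighborhood with `radius` mirrors the passage's point that a confined region works better than the whole image when the input is degraded.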
{
"docid": "788ee16a0f05fe09340e80f14722ee77",
"text": "This paper presents an approach for detecting anomalous events in videos with crowds. The main goal is to recognize patterns that might lead to an anomalous event. An anomalous event might be characterized by the deviation from the normal or usual, but not necessarily in an undesirable manner, e.g., an anomalous event might just be different from normal but not a suspicious event from the surveillance point of view. One of the main challenges of detecting such events is the difficulty to create models due to their unpredictability and their dependency on the context of the scene. Based on these challenges, we present a model that uses general concepts, such as orientation, velocity, and entropy to capture anomalies. Using such a type of information, we can define models for different cases and environments. Assuming images captured from a single static camera, we propose a novel spatiotemporal feature descriptor, called histograms of optical flow orientation and magnitude and entropy, based on optical flow information. To determine the normality or abnormality of an event, the proposed model is composed of training and test steps. In the training, we learn the normal patterns. Then, during test, events are described and if they differ significantly from the normal patterns learned, they are considered as anomalous. The experimental results demonstrate that our model can handle different situations and is able to recognize anomalous events with success. We use the well-known UCSD and Subway data sets and introduce a new data set, namely, Badminton.",
"title": ""
},
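The passage above describes a descriptor built from optical-flow orientation, magnitude, and entropy. The sketch below computes one such joint histogram plus its entropy from a dense flow field with numpy; the bin counts and normalization are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def flow_orientation_magnitude_histogram(flow, n_orient_bins=8, n_mag_bins=4, max_mag=10.0):
    """Joint orientation/magnitude histogram of a dense flow field plus its entropy.

    flow: array of shape (H, W, 2) holding per-pixel (dx, dy) displacements.
    Returns (normalized histogram flattened to 1-D, entropy of the histogram).
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    orientation = np.arctan2(dy, dx)                       # in [-pi, pi]
    magnitude = np.clip(np.hypot(dx, dy), 0.0, max_mag)

    hist, _, _ = np.histogram2d(
        orientation, magnitude,
        bins=[n_orient_bins, n_mag_bins],
        range=[[-np.pi, np.pi], [0.0, max_mag]],
    )
    prob = hist.ravel() / max(hist.sum(), 1.0)
    entropy = -np.sum(prob[prob > 0] * np.log2(prob[prob > 0]))
    return prob, entropy

# Toy flow field: mostly rightward motion with a little noise.
flow = np.dstack([np.ones((120, 160)) * 2.0, 0.1 * np.random.randn(120, 160)])
descriptor, entropy = flow_orientation_magnitude_histogram(flow)
print(descriptor.shape, round(float(entropy), 3))
```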
{
"docid": "fbb48416c34d4faee1a87ac2efaf466d",
"text": "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguisticallyinformed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.",
"title": ""
},
{
"docid": "7ba61c8c5eba7d8140c84b3e7cbc851a",
"text": "One of the aims of modern First-Person Shooter (FPS ) design is to provide an immersive experience to the player. This paper examines the role of sound in enabling s uch immersion and argues that, even in ‘realism’ FPS ga mes, it may be achieved sonically through a focus on carica ture rather than realism. The paper utilizes and develo ps previous work in which both a conceptual framework for the d sign and analysis of run and gun FPS sound is developed and the notion of the relationship between player and FPS soundscape as an acoustic ecology is put forward (G rimshaw and Schott 2007a; Grimshaw and Schott 2007b). Some problems of sound practice and sound reproduction i n the game are highlighted and a conceptual solution is p roposed.",
"title": ""
},
{
"docid": "f7fa80456b0fb479bc694cb89fbd84e5",
"text": "In the past two decades, social capital in its various forms and contexts has emerged as one of the most salient concepts in social sciences. While much excitement has been generated, divergent views, perspectives, and expectations have also raised the serious question : is it a fad or does it have enduring qualities that will herald a new intellectual enterprise? This presentation's purpose is to review social capital as discussed in the literature, identify controversies and debates, consider some critical issues, and propose conceptual and research strategies in building a theory. I will argue that such a theory and the research enterprise must be based on the fundamental understanding that social capital is captured from embedded resources in social networks . Deviations from this understanding in conceptualization and measurement lead to confusion in analyzing causal mechanisms in the macroand microprocesses. It is precisely these mechanisms and processes, essential for an interactive theory about structure and action, to which social capital promises to make contributions .",
"title": ""
},
{
"docid": "45578369630e65fe60be3495767d1367",
"text": "The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation.",
"title": ""
},
{
"docid": "ecd79e88962ca3db82eaf2ab94ecd5f4",
"text": "Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.",
"title": ""
},
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
},
{
"docid": "f4639c2523687aa0d5bfdd840df9cfa4",
"text": "This established database of manufacturers and thei r design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the pa rts of the jeepney vehicle using Philippine National Standards and international sta ndards. The study revealed that most jeepney manufacturing firms have varied specificati ons with regard to the capacity, dimensions and weight of the vehicle and similar sp ecification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers an d passengers want to improve, change and standardize the parts of the jeepney vehicle. The p arts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. It is concluded tha t t e jeepney vehicle can be standardized in terms of design, safety and environmental concerns.",
"title": ""
},
{
"docid": "f90fcd27a0ac4a22dc5f229f826d64bf",
"text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.",
"title": ""
},
{
"docid": "8800dba6bb4cea195c8871eb5be5b0a8",
"text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "1b9ecdeb1df8eaf7cfef88acbe093d78",
"text": "Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics soware. Encoded in each SMILES string is structural information that can be used to predict complex chemical properties. In this work, we develop SMILES2vec, a deep RNN that automatically learns features from SMILES to predict chemical properties, without the need for additional explicit feature engineering. Using Bayesian optimization methods to tune the network architecture, we show that an optimized SMILES2vec model can serve as a general-purpose neural network for predicting distinct chemical properties including toxicity, activity, solubility and solvation energy, while also outperforming contemporary MLP neural networks that uses engineered features. Furthermore, we demonstrate proof-of-concept of interpretability by developing an explanation mask that localizes on the most important characters used in making a prediction. When tested on the solubility dataset, it identied specic parts of a chemical that is consistent with established rst-principles knowledge with an accuracy of 88%. Our work demonstrates that neural networks can learn technically accurate chemical concept and provide state-of-the-art accuracy, making interpretable deep neural networks a useful tool of relevance to the chemical industry.",
"title": ""
},
{
"docid": "e139355ddbe5a8d6293f028e379abc93",
"text": "The IoT is a network of interconnected everyday objects called “things” that have been augmented with a small measure of computing capabilities. Lately, the IoT has been affected by a variety of different botnet activities. As botnets have been the cause of serious security risks and financial damage over the years, existing Network forensic techniques cannot identify and track current sophisticated methods of botnets. This is because commercial tools mainly depend on signature-based approaches that cannot discover new forms of botnet. In literature, several studies have conducted the use of Machine Learning (ML) techniques in order to train and validate a model for defining such attacks, but they still produce high false alarm rates with the challenge of investigating the tracks of botnets. This paper investigates the role of ML techniques for developing a Network forensic mechanism based on network flow identifiers that can track suspicious activities of botnets. The experimental results using the UNSW-NB15 dataset revealed that ML techniques with flow identifiers can effectively and efficiently detect botnets’ attacks and their tracks.",
"title": ""
},
{
"docid": "f01a19652bff88923a3141fb56d805e2",
"text": "This paper presents a visible light communication system, focusing mostly on the aspects related with the hardware design and implementation. The designed system is aimed to ensure a highly-reliable communication between a commercial LED-based traffic light and a receiver mounted on a vehicle. Enabling wireless data transfer between the road infrastructure and vehicles has the potential to significantly increase the safety and efficiency of the transportation system. The paper presents the advantages of the proposed system and explains same of the choices made in the implementation process.",
"title": ""
},
{
"docid": "1157ced7937578d8a54bc9bb462b5706",
"text": "In recent years, the problem of associating a sentence with an image has gained a lot of attention. This work continues to push the envelope and makes further progress in the performance of image annotation and image search by a sentence tasks. In this work, we are using the Fisher Vector as a sentence representation by pooling the word2vec embedding of each word in the sentence. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). In this work we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. Finally, by using the new Fisher Vectors derived from HGLMMs to represent sentences, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks on four benchmarks: Pascal1K, Flickr8K, Flickr30K, and COCO.",
"title": ""
}
] |
scidocsrr
|
29cff8a03006ac91f79f8f420d2267d2
|
Driver Action Prediction Using Deep (Bidirectional) Recurrent Neural Network
|
[
{
"docid": "1169d70de6d0c67f52ecac4d942d2224",
"text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis",
"title": ""
},
{
"docid": "c1235195e9ce4a9db0e22b165915a5ff",
"text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we propose a vehicular sensor-rich platform and learning algorithms for maneuver anticipation. For this purpose we equip a car with cameras, Global Positioning System (GPS), and a computing device to capture the driving context from both inside and outside of the car. In order to anticipate maneuvers, we propose a sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams. Our architecture consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory (LSTM) units to capture long temporal dependencies. We propose a novel training procedure which allows the network to predict the future given only a partial temporal context. We introduce a diverse data set with 1180 miles of natural freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds before they occur in realtime with a precision and recall of 90.5% and 87.4% respectively.",
"title": ""
}
] |
[
{
"docid": "601d9060ac35db540cdd5942196db9e0",
"text": "In this paper, we review nine visualization techniques that can be used for visual exploration of multidimensional financial data. We illustrate the use of these techniques by studying the financial performance of companies from the pulp and paper industry. We also illustrate the use of visualization techniques for detecting multivariate outliers, and other patterns in financial performance data in the form of clusters, relationships, and trends. We provide a subjective comparison between different visualization techniques as to their capabilities for providing insight into financial performance data. The strengths of each technique and the potential benefits of using multiple visualization techniques for gaining insight into financial performance data are highlighted.",
"title": ""
},
{
"docid": "a3da533f428b101c8f8cb0de04546e48",
"text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.",
"title": ""
},
{
"docid": "06c65b566b298cc893388a6f317bfcb1",
"text": "Emotion recognition from speech is one of the key steps towards emotional intelligence in advanced human-machine interaction. Identifying emotions in human speech requires learning features that are robust and discriminative across diverse domains that differ in terms of language, spontaneity of speech, recording conditions, and types of emotions. This corresponds to a learning scenario in which the joint distributions of features and labels may change substantially across domains. In this paper, we propose a deep architecture that jointly exploits a convolutional network for extracting domain-shared features and a long short-term memory network for classifying emotions using domain-specific features. We use transferable features to enable model adaptation from multiple source domains, given the sparseness of speech emotion data and the fact that target domains are short of labeled data. A comprehensive cross-corpora experiment with diverse speech emotion domains reveals that transferable features provide gains ranging from 4.3% to 18.4% in speech emotion recognition. We evaluate several domain adaptation approaches, and we perform an ablation study to understand which source domains add the most to the overall recognition effectiveness for a given target domain.",
"title": ""
},
{
"docid": "1ea8990241b140c1c06d935a5f73abec",
"text": "This paper presents design and implementation of a mobile embedded system to monitor and record key operation indicators of a distribution transformer like load currents, transformer oil and ambient temperatures. The proposed on-line monitoring system integrates a global service mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. It is installed at the distribution transformer site and the above mentioned parameters are recorded using the built-in S-channel analog to digital converter (ADC) of the embedded system. The acquired parameters are processed and recorded in the system memory. If there is any abnormality or an emergency situation the system sends SMS (short message service) messages to designated mobile telephones containing information about the abnormality according to some predefined instructions and policies that are stored on the embedded system EEPROM. Also, it sends SMS to a central database via the GSM modem for further processing. This mobile system will help the utilities to optimally utilize transformers and identify problems before any catastrophic failure.",
"title": ""
},
{
"docid": "62d86051d5f3f53f59547a98632c1e5c",
"text": "Infantile hemangiomas are the most common benign vascular tumors in infancy and childhood. As hemangioma could regress spontaneously, it generally does not require treatment unless proliferation interferes with normal function or gives rise to risk of serious disfigurement and complications unlikely to resolve without treatment. Various methods for treating infant hemangiomas have been documented, including wait and see policy, laser therapy, drug therapy, sclerotherapy, radiotherapy, surgery and so on, but none of these therapies can be used for all hemangiomas. To obtain the best treatment outcomes, the treatment protocol should be individualized and comprehensive as well as sequential. Based on published literature and clinical experiences, we established a treatment guideline in order to provide criteria for the management of head and neck hemangiomas. This protocol will be renewed and updated to include and reflect any cutting-edge medical knowledge, and provide the newest treatment modalities which will benefit our patients.",
"title": ""
},
{
"docid": "c07e6639d32403b267d9b6ef0f475d21",
"text": "Exudates are the primary sign of Diabetic Retinopathy. Early detection can potentially reduce the risk of blindness. An automatic method to detect exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using a Fuzzy C-Means (FCM) clustering is proposed. Contrast enhancement preprocessing is applied before four features, namely intensity, standard deviation on intensity, hue and a number of edge pixels, are extracted to supply as input parameters to coarse segmentation using FCM clustering method. The first result is then fine-tuned with morphological techniques. The detection results are validated by comparing with expert ophthalmologists' hand-drawn ground-truths. Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy are used to evaluate overall performance. It is found that the proposed method detects exudates successfully with sensitivity, specificity, PPV, PLR and accuracy of 87.28%, 99.24%, 42.77%, 224.26 and 99.11%, respectively.",
"title": ""
},
{
"docid": "03b3aa5c74eb4d66c1bd969fbce835c7",
"text": "In the past few decades, unmanned aerial vehicles (UAVs) have become promising mobile platforms capable of navigating semiautonomously or autonomously in uncertain environments. The level of autonomy and the flexible technology of these flying robots have rapidly evolved, making it possible to coordinate teams of UAVs in a wide spectrum of tasks. These applications include search and rescue missions; disaster relief operations, such as forest fires [1]; and environmental monitoring and surveillance. In some of these tasks, UAVs work in coordination with other robots, as in robot-assisted inspection at sea [2]. Recently, radio-controlled UAVs carrying radiation sensors and video cameras were used to monitor, diagnose, and evaluate the situation at Japans Fukushima Daiichi nuclear plant facility [3].",
"title": ""
},
{
"docid": "753eb03a060a5e5999eee478d6d164f9",
"text": "Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated via a comparison of an external simulation of it with state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.",
"title": ""
},
{
"docid": "cc31337277f8816eee0762fe47415f3f",
"text": "Nowadays Photovoltaic (PV) plants have become significant investment projects with long term Return Of Investment (ROI). This is making investors together with operations managers to deal with reliable information on photovoltaic plants performances. Most of the information is gathered through data monitoring systems and also supplied by proper inverters in case of grid connected plants. It usually relates to series/parallel combinations of PV panels strings, in most cases, but rarely to individual PV panels. Furthermore, in case of huge dimensions PV plants, with different ground profiles, etc., should any adverse circumstances happen (panel failure, sudden shadowing, clouds, strong wind), it is difficult to identify the exact problem location. The use of distributed wired or wireless sensors can be a solution. Nevertheless, no one is problems free and all are significant cost. In this article is proposed a low cost DC Power Lines Communications (DC PLC) based PV plant parameters smart monitoring communications and control module. The aim is the development of a micro controller (uC) based sensor module with corresponding modem for communications through already existing DC plant power wiring as data transmission lines. This will reduce drastically both hardware and transmission lines costs.",
"title": ""
},
{
"docid": "b250ac830e1662252069cc85128358a7",
"text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.",
"title": ""
},
{
"docid": "da7beedfca8e099bb560120fc5047399",
"text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.",
"title": ""
},
{
"docid": "341f04892cc9f965abca32458b67f63c",
"text": "In this paper, two single fed low-profile cavity-backed planar slot antennas for circular polarization (CP) applications are first introduced by half mode substrate integrated waveguide (HMSIW) technique. One of the structures presents right handed CP (RHCP), while the other one offers left handed CP (LHCP). A single layer of low cost printed circuit board (PCB) is employed for both antennas providing low-cost, lightweight, and also easy integration with planar circuits. An inset microstrip line is used to excite two orthogonal quarter-wave length patch modes with required phase difference for generating CP wave. The new proposed antennas are successfully designed and fabricated. Measured results are in good agreement with those obtained by numerical investigation using HFSS. Results exhibit that both antennas present the advantages of conventional cavity backed antennas including high gain and high front to back ratio (FTBR).",
"title": ""
},
{
"docid": "7842e5c7ad3dc11d9d53b360e4e2691a",
"text": "It is becoming obvious that all cancers have a defe ctiv p53 pathway, either through TP53 mutation or deregulation of the tumor suppressor function of the wild type TP53 . In this study we examined the expression of P53 and Caspase 3 in transperitoneally injected Ehrlich As cite carcinoma cells (EAC) treated with Tetrodotoxin in the liver of adult mice in order to evaluate the po ssible pro apoptotic effect of Tetrodotoxin . Results: Early in the treatment, num erous EAC detected in the large blood vessels & cen tral veins and expressed both of P53 & Caspase 3 in contrast to the late absence of P53 expressing EAC at the 12 th day of Tetrodotoxin treatment. In the same context , predominantly the perivascular hepatocytes expresse d Caspase 3 in contrast to the more diffuse express ion pattern late with Tetrodotoxin treatment. Non of the hepatocytes ever expressed P5 3 neither with early nor late Tetrodotoxin treatmen t. Conclusion: Tetrodotoxin therapy has a proapoptotic effect on Ehrlich Ascites carcin oma Cells (EAC). This may be through enhancing the tumor suppressor function of the wild type TP53 with subsequent Caspase 3 activation .",
"title": ""
},
{
"docid": "9b98e43825bd36736c7c87bb2cee5a8c",
"text": "Corresponding Author: Daniel Strmečki Faculty of Organization and Informatics, Pavlinska 2, 42000 Varaždin, Croatia Email: danstrmecki@gmail.com Abstract: Gamification is the usage of game mechanics, dynamics, aesthetics and game thinking in non-game systems. Its main objective is to increase user’s motivation, experience and engagement. For the same reason, it has started to penetrate in e-learning systems. However, when using gamified design elements in e-learning, we must consider various types of learners. In the phases of analysis and design of such elements, the cooperation of education, technology, pedagogy, design and finance experts is required. This paper discusses the development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems. Several gamified design elements are found suited for e-learning (including points, badges, trophies, customization, leader boards, levels, progress tracking, challenges, feedback, social engagement loops and the freedom to fail). Advices for the usage of each of those elements in e-learning systems are also provided in this study. Based on those advises and the identified phases of introducing gamification info e-learning systems, we conducted an experimental study to investigate the effectiveness of gamification of an informatics online course. Results showed that students enrolled in the gamified version of the online module achieved greater learning success. Positive results encourage us to investigate the gamification of online learning content for other topics and courses. We also encourage more research on the influence of specific gamified design elements on learner’s motivation and engagement.",
"title": ""
},
{
"docid": "4c5ac799c97f99d3a64bcbea6b6cb88d",
"text": "This paper presents a new type of monolithic microwave integrated circuit (MMIC)-based active quasi-circulator using phase cancellation and combination techniques for simultaneous transmit and receive (STAR) phased-array applications. The device consists of a passive core of three quadrature hybrids and active components to provide active quasi-circulation operation. The core of three quadrature hybrids can be implemented using Lange couplers. The device is capable of high isolation performance, high-frequency operation, broadband performance, and improvement of the noise figure (NF) at the receive port by suppressing transmit noise. For passive quasi-circulation operation, the device can achieve 35-dB isolation between the transmit and receive port with 2.6-GHz bandwidth (BW) and insertion loss of 4.5 dB at X-band. For active quasi-operation, the device is shown to have 2.3-GHz BW of 30-dB isolation with 1.5-dB transmit-to-antenna gain and 4.7-dB antenna-to-receive insertion loss, while the NF at the receive port is approximately 5.5 dB. The device is capable of a power stress test up to 34 dBm at the output ports at 10.5 GHz. For operation with typical 25-dB isolation, the device is capable of operation up to 5.6-GHz BW at X-band. The device is also shown to be operable up to W -band by simulation with ~15-GHz BW of 20-dB isolation. The proposed architecture is suitable for MMIC integration and system-on-chip applications.",
"title": ""
},
{
"docid": "dde9424652393fa66350ec6510c20e97",
"text": "Framed under a cognitive approach to task-based L2 learning, this study used a pedagogical approach to investigate the effects of three vocabulary lessons (one traditional and two task-based) on acquisition of basic meanings, forms and morphological aspects of Spanish words. Quantitative analysis performed on the data suggests that the type of pedagogical approach had no impact on immediate retrieval (after treatment) of targeted word forms, but it had an impact on long-term retrieval (one week) of targeted forms. In particular, task-based lessons seemed to be more effective than the Presentation, Practice and Production (PPP) lesson. The analysis also suggests that a task-based lesson with an explicit focus-on-forms component was more effective than a task-based lesson that did not incorporate this component in promoting acquisition of word morphological aspects. The results also indicate that the explicit focus on forms component may be more effective when placed at the end of the lesson, when meaning has been acquired. Results are explained in terms of qualitative differences in amounts of focus on form and meaning, type of form-focused instruction provided, and opportunities for on-line targeted output retrieval. The findings of this study provide evidence for the value of a proactive (Doughty and Williams, 1998a) form-focused approach to Task-Based L2 vocabulary learning, especially structure-based production tasks (Ellis, 2003). Overall, they suggest an important role of pedagogical tasks in teaching L2 vocabulary.",
"title": ""
},
{
"docid": "6436f0137e5dbc3fb3dac031ddb93629",
"text": "Perovskite solar cells based on organometal halide light absorbers have been considered a promising photovoltaic technology due to their superb power conversion efficiency (PCE) along with very low material costs. Since the first report on a long-term durable solid-state perovskite solar cell with a PCE of 9.7% in 2012, a PCE as high as 19.3% was demonstrated in 2014, and a certified PCE of 17.9% was shown in 2014. Such a high photovoltaic performance is attributed to optically high absorption characteristics and balanced charge transport properties with long diffusion lengths. Nevertheless, there are lots of puzzles to unravel the basis for such high photovoltaic performances. The working principle of perovskite solar cells has not been well established by far, which is the most important thing for understanding perovksite solar cells. In this review, basic fundamentals of perovskite materials including opto-electronic and dielectric properties are described to give a better understanding and insight into high-performing perovskite solar cells. In addition, various fabrication techniques and device structures are described toward the further improvement of perovskite solar cells.",
"title": ""
},
{
"docid": "ddab10d66473ac7c4de26e923bf59083",
"text": "Phased arrays allow electronic scanning of the antenna beam. However, these phased arrays are not widely used due to a high implementation cost. This article discusses the advantages of the RF architecture and the implementation of silicon RFICs for phased-array transmitters/receivers. In addition, this work also demonstrates how silicon RFICs can play a vital role in lowering the cost of phased arrays.",
"title": ""
},
{
"docid": "953e70084692643648e6f489aa1e761e",
"text": "To successfully select and implement nudges, policy makers need a psychological understanding of who opposes nudges, how they are perceived, and when alternative methods (e.g., forced choice) might work better. Using two representative samples, we examined four factors that influence U.S. attitudes toward nudges – types of nudges, individual dispositions, nudge perceptions, and nudge frames. Most nudges were supported, although opt-out defaults for organ donations were opposed in both samples. “System 1” nudges (e.g., defaults and sequential orderings) were viewed less favorably than “System 2” nudges (e.g., educational opportunities or reminders). System 1 nudges were perceived as more autonomy threatening, whereas System 2 nudges were viewed as more effective for better decision making and more necessary for changing behavior. People with greater empathetic concern tended to support both types of nudges and viewed them as the “right” kind of goals to have. Individualists opposed both types of nudges, and conservatives tended to oppose both types. Reactant people and those with a strong desire for control opposed System 1 nudges. To see whether framing could influence attitudes, we varied the description of the nudge in terms of the target (Personal vs. Societal) and the reference point for the nudge (Costs vs. Benefits). Empathetic people were more supportive when framing highlighted societal costs or benefits, and reactant people were more opposed to nudges when frames highlighted the personal costs of rejection.",
"title": ""
}
] |
scidocsrr
|
530f8bf58b05f05dd09dd9df731e50bb
|
PACIS ) 2014 EXPLORING MOBILE PAYMENT ADOPTION IN CHINA
|
[
{
"docid": "401e7ab4d97d7f0f113b8ca9ec1c91ce",
"text": "The probability sampling techniques used for quantitative studies are rarely appropriate when conducting qualitative research. This article considers and explains the differences between the two approaches and describes three broad categories of naturalistic sampling: convenience, judgement and theoretical models. The principles are illustrated with practical examples from the author's own research.",
"title": ""
},
{
"docid": "3e691cf6055eb564dedca955b816a654",
"text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other. 2011 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +86 27 8755 8100; fax: +86 27 8755 6437. E-mail addresses: luyb@mail.hust.edu.cn (Y. Lu), xtysq@smail.hust.edu.cn (S. Yang), Chau@business.hku.hk (Patrick Y.K. Chau), skysharecao@163.com (Y. Cao). 1 Tel.: +86 27 8755 6448. 2 Tel.: +852 2859 1025. 3 Tel.: +86 27 8755 8100.",
"title": ""
}
] |
[
{
"docid": "a1f1d34e8ceeb984976e45074694d4c2",
"text": "This paper proposes a model of the doubly fed induction generator (DFIG) suitable for transient stability studies. The main assumption adopted in the model is that the current control loops, which are much faster than the electromechanic transients under study, do not have a significant influence on the transient stability of the power system and may be considered instantaneous. The proposed DFIG model is a set of algebraic equations which are solved using an iterative procedure. A method is also proposed to calculate the DFIG initial conditions. A detailed variable-speed windmill model has been developed using the proposed DFIG model. This windmill model has been integrated in a transient stability simulation program in order to demonstrate its feasibility. Several simulations have been performed using a base case which includes a small grid, a wind farm represented by a single windmill, and different operation points. The evolution of several electric variables during the simulations is shown and discussed.",
"title": ""
},
{
"docid": "a62a23df11fd72522a3d9726b60d4497",
"text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.",
"title": ""
},
{
"docid": "9a19caf553338e950c89f5f670016f50",
"text": "Countering distributed denial of service (DDoS) attacks is becoming ever more challenging with the vast resources and techniques increasingly available to attackers. In this paper, we consider sophisticated attacks that are protocol-compliant, non-intrusive, and utilize legitimate application-layer requests to overwhelm system resources. We characterize application-layer resource attacks as either request flooding, asymmetric, or repeated one-shot, on the basis of the application workload parameters that they exploit. To protect servers from these attacks, we propose a counter-mechanism namely DDoS Shield that consists of a suspicion assignment mechanism and a DDoS-resilient scheduler. In contrast to prior work, our suspicion mechanism assigns a continuous value as opposed to a binary measure to each client session, and the scheduler utilizes these values to determine if and when to schedule a session's requests. Using testbed experiments on a web application, we demonstrate the potency of these resource attacks and evaluate the efficacy of our counter-mechanism. For instance, we mount an asymmetric attack which overwhelms the server resources, increasing the response time of legitimate clients from 0.3 seconds to 40 seconds. Under the same attack scenario, DDoS Shield improves the victims' performance to 1.5 seconds.",
"title": ""
},
{
"docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9",
"text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.",
"title": ""
},
{
"docid": "775e78af608c07853af2e2c31a59bf5c",
"text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.",
"title": ""
},
{
"docid": "a72c9eb8382d3c94aae77fa4eadd1df8",
"text": "Techniques for identifying the author of an unattributed document can be applied to problems in information analysis and in academic scholarship. A range of methods have been proposed in the research literature, using a variety of features and machine learning approaches, but the methods have been tested on very different data and the results cannot be compared. It is not even clear whether the differences in performance are due to feature selection or other variables. In this paper we examine the use of a large publicly available collection of newswire articles as a benchmark for comparing authorship attribution methods. To demonstrate the value of having a benchmark, we experimentally compare several recent feature-based techniques for authorship attribution, and test how well these methods perform as the volume of data is increased. We show that the benchmark is able to clearly distinguish between different approaches, and that the scalability of the best methods based on using function words features is acceptable, with only moderate decline as the difficulty of the problem is increased.",
"title": ""
},
{
"docid": "5a248466c2e82b8453baa483a05bc25b",
"text": "Early severe stress and maltreatment produces a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala. Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.",
"title": ""
},
{
"docid": "aa234355d0b0493e1d8c7a04e7020781",
"text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.",
"title": ""
},
{
"docid": "88f643b2bd917e47e5173d34744c4b20",
"text": "Large image datasets such as ImageNet or open-ended photo websites like Flickr are revealing new challenges to image classification that were not apparent in smaller, fixed sets. In particular, the efficient handling of dynamically growing datasets, where not only the amount of training data but also the number of classes increases over time, is a relatively unexplored problem. In this challenging setting, we study how two variants of Random Forests (RF) perform under four strategies to incorporate new classes while avoiding to retrain the RFs from scratch. The various strategies account for different trade-offs between classification accuracy and computational efficiency. In our extensive experiments, we show that both RF variants, one based on Nearest Class Mean classifiers and the other on SVMs, outperform conventional RFs and are well suited for incrementally learning new classes. In particular, we show that RFs initially trained with just 10 classes can be extended to 1,000 classes with an acceptable loss of accuracy compared to training from the full data and with great computational savings compared to retraining for each new batch of classes.",
"title": ""
},
{
"docid": "bbfdc30b412df84861e242d4305ca20d",
"text": "OBJECTIVES\nLocal anesthetic injection into the interspace between the popliteal artery and the posterior capsule of the knee (IPACK) has the potential to provide motor-sparing analgesia to the posterior knee after total knee arthroplasty. The primary objective of this cadaveric study was to evaluate injectate spread to relevant anatomic structures with IPACK injection.\n\n\nMETHODS\nAfter receipt of Institutional Review Board Biospecimen Subcommittee approval, IPACK injection was performed on fresh-frozen cadavers. The popliteal fossa in each specimen was dissected and examined for injectate spread.\n\n\nRESULTS\nTen fresh-frozen cadaver knees were included in the study. Injectate was observed to spread in the popliteal fossa at a mean ± SD of 6.1 ± 0.7 cm in the medial-lateral dimension and 10.1 ± 3.2 cm in the proximal-distal dimension. No injectate was noted to be in contact with the proximal segment of the sciatic nerve, but 3 specimens showed injectate spread to the tibial nerve. In 3 specimens, the injectate showed possible contact with the common peroneal nerve. The middle genicular artery was consistently surrounded by injectate.\n\n\nCONCLUSIONS\nThis cadaver study of IPACK injection demonstrated spread throughout the popliteal fossa without proximal sciatic involvement. However, the potential for injectate to spread to the tibial or common peroneal nerve was demonstrated. Consistent surrounding of the middle genicular artery with injectate suggests a potential mechanism of analgesia for the IPACK block, due to the predictable relationship between articular sensory nerves and this artery. Further study is needed to determine the ideal site of IPACK injection.",
"title": ""
},
{
"docid": "09b273c9e77f6fc1b2de20f50227c44d",
"text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task specific CNN model, which has a limited number of layers and trained from scratch using a limited amount of data as in the case of GilNet. Domain specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks, when compared with generic AlexNet-like model, which shows that transfering from a closer domain is more useful.",
"title": ""
},
{
"docid": "743424b3b532b16f018e92b2563458d5",
"text": "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.",
"title": ""
},
{
"docid": "265bf26646113a56101c594f563cb6dc",
"text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.",
"title": ""
},
{
"docid": "3b6b746f4467fd53ade1d6d2798c45b7",
"text": "We present a new deep learning architecture (called Kdnetwork) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kdtrees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform twodimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.",
"title": ""
},
{
"docid": "3afa34f0420e422cfe1b3d61abad5e7f",
"text": "One of the many challenges in designing autonomy for operation in uncertain and dynamic environments is the planning of collision-free paths. Roadmap-based motion planning is a popular technique for identifying collision-free paths, since it approximates the often infeasible space of all possible motions with a networked structure of valid configurations. We use stochastic reachable sets to identify regions of low collision probability, and to create roadmaps which incorporate likelihood of collision. We complete a small number of stochastic reachability calculations with individual obstacles a priori. This information is then associated with the weight, or preference for traversal, given to a transition in the roadmap structure. Our method is novel, and scales well with the number of obstacles, maintaining a relatively high probability of reaching the goal in a finite time horizon without collision, as compared to other methods. We demonstrate our method on systems with up to 50 dynamic obstacles.",
"title": ""
},
{
"docid": "9d5593d89a206ac8ddb82921c2a68c43",
"text": "This paper presents an automatic traffic surveillance system to estimate important traffic parameters from video sequences using only one camera. Different from traditional methods that can classify vehicles to only cars and noncars, the proposed method has a good ability to categorize vehicles into more specific classes by introducing a new \"linearity\" feature in vehicle representation. In addition, the proposed system can well tackle the problem of vehicle occlusions caused by shadows, which often lead to the failure of further vehicle counting and classification. This problem is solved by a novel line-based shadow algorithm that uses a set of lines to eliminate all unwanted shadows. The used lines are devised from the information of lane-dividing lines. Therefore, an automatic scheme to detect lane-dividing lines is also proposed. The found lane-dividing lines can also provide important information for feature normalization, which can make the vehicle size more invariant, and thus much enhance the accuracy of vehicle classification. Once all features are extracted, an optimal classifier is then designed to robustly categorize vehicles into different classes. When recognizing a vehicle, the designed classifier can collect different evidences from its trajectories and the database to make an optimal decision for vehicle classification. Since more evidences are used, more robustness of classification can be achieved. Experimental results show that the proposed method is more robust, accurate, and powerful than other traditional methods, which utilize only the vehicle size and a single frame for vehicle classification.",
"title": ""
},
{
"docid": "04647771810ac62b27ee8da12833a02d",
"text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.",
"title": ""
},
{
"docid": "2b32087daf5c104e60f91ebf19cd744d",
"text": "A large amount of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.",
"title": ""
},
{
"docid": "da19fd683e64b0192bd52eadfade33a2",
"text": "For professional users such as firefighters and other first responders, GNSS positioning technology (GPS, assisted GPS) can satisfy outdoor positioning requirements in many instances. However, there is still a need for high-performance deep indoor positioning for use by these same professional users. This need has already been clearly expressed by various communities of end users in the context of WearIT@Work, an R&D project funded by the European Community's Sixth Framework Program. It is known that map matching can help for indoor pedestrian navigation. In most previous research, it was assumed that detailed building plans are available. However, in many emergency / rescue scenarios, only very limited building plan information may be at hand. For example a building outline might be obtained from aerial photographs or cataster databases. Alternatively, an escape plan posted at the entrances to many building would yield only approximate exit door and stairwell locations as well as hallway and room orientation. What is not known is how much map information is really required for a USAR mission and how much each level of map detail might help to improve positioning accuracy. Obviously, the geometry of the building and the course through will be factors consider. The purpose of this paper is to show how a previously published Backtracking Particle Filter (BPF) can be combined with different levels of building plan detail to improve PDR performance. A new in/out scenario that might be typical of a reconnaissance mission during a fire in a two-story office building was evaluated. Using only external wall information, the new scenario yields positioning performance (2.56 m mean 2D error) that is greatly superior to the PDR-only, no map base case (7.74 m mean 2D error). This result has a substantial practical significance since this level of building plan detail could be quickly and easily generated in many emergency instances. The technique could be used to mitigate heading errors that result from exposing the IMU to extreme operating conditions. It is hoped that this mitigating effect will also occur for more irregular paths and in larger traversed spaces such as parking garages and warehouses.",
"title": ""
},
{
"docid": "f4009fde2b4ac644d3b83b664e178b5f",
"text": "This chapter describes the history of metaheuristics in five distinct periods, starting long before the first use of the term and ending a long time in the future.",
"title": ""
}
] |
scidocsrr
|
a77f69045f4fb1cd9df339bb888672cd
|
ASR-based Features for Emotion Recognition: A Transfer Learning Approach
|
[
{
"docid": "33bee298704171e68e413e875e413af3",
"text": "We introduce multiplicative LSTM (mLSTM), a novel recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level modelling tasks, and that this improvement increases with the complexity of the task. This model achieves a test error of 1.19 bits/character on the last 4 million characters of the Hutter prize dataset when combined with dynamic evaluation.",
"title": ""
},
{
"docid": "9d672a1d45bfd078c16915b7f5d949b0",
"text": "To design a useful recommender system, it is important to understand how products relate to each other. For example, while a user is browsing mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. In economics, these two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Such relationships are essential as they help us to identify items that are relevant to a user's search.\n Our goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews. We treat this as a supervised learning problem, trained using networks of products derived from browsing and co-purchasing logs. Methodologically, we build topic models that are trained to automatically discover topics from product reviews that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.",
"title": ""
},
{
"docid": "3f5eed1f718e568dc3ba9abbcd6bfedd",
"text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"title": ""
}
] |
[
{
"docid": "3c66777d5f6c88c9e2881df4fb7783e6",
"text": "Large-scale Internet of Things (IoT) services such as healthcare, smart cities, and marine monitoring are pervasive in cyber-physical environments strongly supported by Internet technologies and fog computing. Complex IoT services are increasingly composed of sensors, devices, and compute resources within fog computing infrastructures. The orchestration of such applications can be leveraged to alleviate the difficulties of maintenance and enhance data security and system reliability. However, efficiently dealing with dynamic variations and transient operational behavior is a crucial challenge within the context of choreographing complex services. Furthermore, with the rapid increase of the scale of IoT deployments, the heterogeneity, dynamicity, and uncertainty within fog environments and increased computational complexity further aggravate this challenge. This article gives an overview of the core issues, challenges, and future research directions in fog-enabled orchestration for IoT services. Additionally, it presents early experiences of an orchestration scenario, demonstrating the feasibility and initial results of using a distributed genetic algorithm in this context.",
"title": ""
},
{
"docid": "4f747c2fb562be4608d1f97ead32e00b",
"text": "With rapid development of the Internet, the web contents become huge. Most of the websites are publicly available and anyone can access the contents everywhere such as workplace, home and even schools. Nevertheless, not all the web contents are appropriate for all users, especially children. An example of these contents is pornography images which should be restricted to certain age group. Besides, these images are not safe for work (NSFW) in which employees should not be seen accessing such contents. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated on a weighted sum of multiple deep neural network models. The weights of each CNN models are expressed as a linear regression problem learnt using Ordinary Least Squares (OLS). Experimental results demonstrate that the proposed model outperforms both single CNN model and the average sum of CNN models in adult content recognition.",
"title": ""
},
{
"docid": "5f068a11901763af752df9480b97e0c0",
"text": "Beginning with a brief review of CMOS scaling trends from 1 m to 0.1 m, this paper examines the fundamental factors that will ultimately limit CMOS scaling and considers the design issues near the limit of scaling. The fundamental limiting factors are electron thermal energy, tunneling leakage through gate oxide, and 2D electrostatic scale length. Both the standby power and the active power of a processor chip will increase precipitously below the 0.1m or 100-nm technology generation. To extend CMOS scaling to the shortest channel length possible while still gaining significant performance benefit, an optimized, vertically and laterally nonuniform doping design (superhalo) is presented. It is projected that room-temperature CMOS will be scaled to 20-nm channel length with the superhalo profile. Low-temperature CMOS allows additional design space to further extend CMOS scaling to near 10 nm.",
"title": ""
},
{
"docid": "846931a1e4c594626da26931110c02d6",
"text": "A large volume of research has been conducted in the cognitive radio (CR) area the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence, neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.",
"title": ""
},
{
"docid": "8607b42b5c5ee1d535794390e06eb1bf",
"text": "Quantitative association rule (QAR) mining has been recognized an influential research problem over the last decade due to the popularity of quantitative databases and the usefulness of association rules in real life. Unlike boolean association rules (BARs), which only consider boolean attributes, QARs consist of quantitative attributes which contain much richer information than the boolean attributes. However, the combination of these quantitative attributes and their value intervals always gives rise to the generation of an explosively large number of itemsets, thereby severely degrading the mining efficiency. In this paper, we propose an information-theoretic approach to avoid unrewarding combinations of both the attributes and their value intervals being generated in the mining process. We study the mutual information between the attributes in a quantitative database and devise a normalization on the mutual information to make it applicable in the context of QAR mining. To indicate the strong informative relationships among the attributes, we construct a mutual information graph (MI graph), whose edges are attribute pairs that have normalized mutual information no less than a predefined information threshold. We find that the cliques in the MI graph represent a majority of the frequent itemsets. We also show that frequent itemsets that do not form a clique in the MI graph are those whose attributes are not informatively correlated to each other. By utilizing the cliques in the MI graph, we devise an efficient algorithm that significantly reduces the number of value intervals of the attribute sets to be joined during the mining process. Extensive experiments show that our algorithm speeds up the mining process by up to two orders of magnitude. Most importantly, we are able to obtain most of the high-confidence QARs, whereas the QARs that are not returned by MIC are shown to be less interesting.",
"title": ""
},
{
"docid": "a7683aa1cdb5cec5c00de191463acd8b",
"text": "A novel PN diode decoding method for 3D NAND Flash is proposed. The PN diodes are fabricated self-aligned at the source side of the Vertical Gate (VG) 3D NAND architecture. Contrary to the previous 3D NAND approaches, there is no need to fabricate plural string select (SSL) transistors inside the array, thus enabling a highly symmetrical and scalable cell structure. A novel three-step programming pulse waveform is integrated to implement the program-inhibit method, capitalizing on that the PN diodes can prevent leakage of the self-boosted channel potential. A large program-disturb-free window >5V is demonstrated.",
"title": ""
},
{
"docid": "f6167a74c881d16faaf8fb4e804191e2",
"text": "Automation, machine learning, and artificial intelligence (AI) are changing the landscape of echocardiography providing complimentary tools to physicians to enhance patient care. Multiple vendor software programs have incorporated automation to improve accuracy and efficiency of manual tracings. Automation with longitudinal strain and 3D echocardiography has shown great accuracy and reproducibility allowing the incorporation of these techniques into daily workflow. This will give further experience to nonexpert readers and allow the integration of these essential tools into more echocardiography laboratories. The potential for machine learning in cardiovascular imaging is still being discovered as algorithms are being created, with training on large data sets beyond what traditional statistical reasoning can handle. Deep learning when applied to large image repositories will recognize complex relationships and patterns integrating all properties of the image, which will unlock further connections about the natural history and prognosis of cardiac disease states. The purpose of this review article was to describe the role and current use of automation, machine learning, and AI in echocardiography and discuss potential limitations and challenges of in the future.",
"title": ""
},
{
"docid": "56ec8f3e88731992a028a9322dbc4890",
"text": "The term knowledge visualization has been used in many different fields with many different definitions. In this paper, we propose a new definition of knowledge visualization specifically in the context of visual analysis and reasoning. Our definition begins with the differentiation of knowledge as either explicit and tacit knowledge. We then present a model for the relationship between the two through the use visualization. Instead of directly representing data in a visualization, we first determine the value of the explicit knowledge associated with the data based on a cost/benefit analysis and display the knowledge in accordance to its importance. We propose that the displayed explicit knowledge leads us to create our own tacit knowledge through visual analytical reasoning and discovery.",
"title": ""
},
{
"docid": "a701b681b5fb570cf8c0668fe691ee15",
"text": "Coagulation-flocculation is a relatively simple physical-chemical technique in treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.",
"title": ""
},
{
"docid": "8b7715a1a7d9d668e52a8f2bd90c89fa",
"text": "A 275mm2 network-on-chip architecture contains 80 tiles arranged as a 10 times 8 2D array of floating-point cores and packet-switched routers, operating at 4GHz. The 15-F04 design employs mesochronous clocking, fine-grained clock gating, dynamic sleep transistors, and body-bias techniques. The 65nm 100M transistor die is designed to achieve a peak performance of 1.0TFLOPS at 1V while dissipating 98W.",
"title": ""
},
{
"docid": "65a87f693d78e69c01d812fef7e9e85a",
"text": "MDPL has been proposed as a masked logic style that counteracts DPA attacks. Recently, it has been shown that the so-called “early propagation effect” might reduce the security of this logic style significantly. In the light of these findings, a 0.13 μm prototype chip that includes the implementation of an 8051-compatible microcontroller in MDPL has been analyzed. Attacks on the measured power traces of this implementation show a severe DPA leakage. In this paper, the results of a detailed analysis of the reasons for this leakage are presented. Furthermore, a proposal is made on how to improve MDPL with respect to the identified problems.",
"title": ""
},
{
"docid": "ad131f6baec15a011252f484f1ef8f18",
"text": "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling. Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure the learned BN to be a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively. Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.",
"title": ""
},
{
"docid": "fa4f9e00ae199f34f2c28cb56799c7e5",
"text": "OBJECTIVE\nTo examine how concurrent partnerships amplify the rate of HIV spread, using methods that can be supported by feasible data collection.\n\n\nMETHODS\nA fully stochastic simulation is used to represent a population of individuals, the sexual partnerships that they form and dissolve over time, and the spread of an infectious disease. Sequential monogamy is compared with various levels of concurrency, holding all other features of the infection process constant. Effective summary measures of concurrency are developed that can be estimated on the basis of simple local network data.\n\n\nRESULTS\nConcurrent partnerships exponentially increase the number of infected individuals and the growth rate of the epidemic during its initial phase. For example, when one-half of the partnerships in a population are concurrent, the size of the epidemic after 5 years is 10 times as large as under sequential monogamy. The primary cause of this amplification is the growth in the number of people connected in the network at any point in time: the size of the largest \"component'. Concurrency increases the size of this component, and the result is that the infectious agent is no longer trapped in a monogamous partnership after transmission occurs, but can spread immediately beyond this partnership to infect others. The summary measure of concurrency developed here does a good job in predicting the size of the amplification effect, and may therefore be a useful and practical tool for evaluation and intervention at the beginning of an epidemic.\n\n\nCONCLUSION\nConcurrent partnerships may be as important as multiple partners or cofactor infections in amplifying the spread of HIV. The public health implications are that data must be collected properly to measure the levels of concurrency in a population, and that messages promoting one partner at a time are as important as messages promoting fewer partners.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "80912c6ff371cdc47ef92e793f2497a0",
"text": "Since the explosion of the Web as a business medium, one of its primary uses has been for marketing. Soon, the Web will become a critical distribution channel for the majority of successful enterprises. The mass media, consumer marketers and advertising agencies seem to be in the midst of Internet discovery and exploitation. Before a company can envision what might sell online in the coming years, it must ®rst understand the attitudes and behaviour of its potential customers. Hence, this study examines attitudes toward various aspects of online shopping and provides a better understanding of the potential of electronic commerce for both researchers and practitioners.",
"title": ""
},
{
"docid": "fc06673e86c237e06d9e927e2f6468a8",
"text": "Locality sensitive hashing (LSH) is a computationally efficient alternative to the distance based anomaly detection. The main advantages of LSH lie in constant detection time, low memory requirement, and simple implementation. However, since the metric of distance in LSHs does not consider the property of normal training data, a naive use of existing LSHs would not perform well. In this paper, we propose a new hashing scheme so that hash functions are selected dependently on the properties of the normal training data for reliable anomaly detection. The distance metric of the proposed method, called NSH (Normality Sensitive Hashing) is theoretically interpreted in terms of the region of normal training data and its effectiveness is demonstrated through experiments on real-world data. Our results are favorably comparable to state-of-the arts with the low-level features.",
"title": ""
},
{
"docid": "f95ace29fea990f496f011446d4ed88f",
"text": "Deep-learning has dramatically changed the world overnight. It greatly boosted the development of visual perception, object detection, and speech recognition, etc. That was attributed to the multiple convolutional processing layers for abstraction of learning representations from massive data. The advantages of deep convolutional structures in data processing motivated the applications of artificial intelligence methods in robotic problems, especially perception and control system, the two typical and challenging problems in robotics. This paper presents a survey of the deep-learning research landscape in mobile robotics. We start with introducing the definition and development of deep-learning in related fields, especially the essential distinctions between image processing and robotic tasks. We described and discussed several typical applications and related works in this domain, followed by the benefits from deeplearning, and related existing frameworks. Besides, operation in the complex dynamic environment is regarded as a critical bottleneck for mobile robots, such as that for autonomous driving. We thus further emphasize the recent achievement on how deeplearning contributes to navigation and control systems for mobile robots. At the end, we discuss the open challenges and research frontiers.",
"title": ""
},
{
"docid": "4fc4008c6762a18fef474ad251359bfa",
"text": "Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, and provide a review of the relevant literature. Finally, we address security implications from self-improving intelligent software.",
"title": ""
},
{
"docid": "d34b81ac6c521cbf466b4b898486a201",
"text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that, our approach achieves a precision of 65% for a recall of 90%.",
"title": ""
},
{
"docid": "737a7c63bab1a6688ec280d5d1abc7b5",
"text": "Medicine continues to struggle in its approaches to numerous common subjective pain syndromes that lack objective signs and remain treatment resistant. Foremost among these are migraine, fibromyalgia, and irritable bowel syndrome, disorders that may overlap in their affected populations and whose sufferers have all endured the stigma of a psychosomatic label, as well as the failure of endless pharmacotherapeutic interventions with substandard benefit. The commonality in symptomatology in these conditions displaying hyperalgesia and central sensitization with possible common underlying pathophysiology suggests that a clinical endocannabinoid deficiency might characterize their origin. Its base hypothesis is that all humans have an underlying endocannabinoid tone that is a reflection of levels of the endocannabinoids, anandamide (arachidonylethanolamide), and 2-arachidonoylglycerol, their production, metabolism, and the relative abundance and state of cannabinoid receptors. Its theory is that in certain conditions, whether congenital or acquired, endocannabinoid tone becomes deficient and productive of pathophysiological syndromes. When first proposed in 2001 and subsequently, this theory was based on genetic overlap and comorbidity, patterns of symptomatology that could be mediated by the endocannabinoid system (ECS), and the fact that exogenous cannabinoid treatment frequently provided symptomatic benefit. However, objective proof and formal clinical trial data were lacking. Currently, however, statistically significant differences in cerebrospinal fluid anandamide levels have been documented in migraineurs, and advanced imaging studies have demonstrated ECS hypofunction in post-traumatic stress disorder. Additional studies have provided a firmer foundation for the theory, while clinical data have also produced evidence for decreased pain, improved sleep, and other benefits to cannabinoid treatment and adjunctive lifestyle approaches affecting the ECS.",
"title": ""
}
] |
scidocsrr
|
31ccf1dd6e5a1eb7e95939d057258805
|
An efficient lane detection algorithm for lane departure detection
|
[
{
"docid": "b44df1268804e966734ea404b8c29360",
"text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.",
"title": ""
}
] |
[
{
"docid": "d3049fee1ed622515f5332bcfa3bdd7b",
"text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.",
"title": ""
},
{
"docid": "64cf7bd992bc6fea358273497d962619",
"text": "Magnetic skyrmions are promising candidates for next-generation information carriers, owing to their small size, topological stability, and ultralow depinning current density. A wide variety of skyrmionic device concepts and prototypes have recently been proposed, highlighting their potential applications. Furthermore, the intrinsic properties of skyrmions enable new functionalities that may be inaccessible to conventional electronic devices. Here, we report on a skyrmion-based artificial synapse device for neuromorphic systems. The synaptic weight of the proposed device can be strengthened/weakened by positive/negative stimuli, mimicking the potentiation/depression process of a biological synapse. Both short-term plasticity and long-term potentiation functionalities have been demonstrated with micromagnetic simulations. This proposal suggests new possibilities for synaptic devices in neuromorphic systems with adaptive learning function.",
"title": ""
},
{
"docid": "8b79816cc07237489dafde316514702a",
"text": "In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.",
"title": ""
},
{
"docid": "fc67e1213423e599d488a1974d29bca0",
"text": "The next generation communication system demands for high data rate transfer leading towards exploring a higher level of frequency spectrum. In view of this demand, design of substrate integrated waveguide filters has been presented here in conjunction with metamaterial technology to increase the performance. A metamaterial based substrate integrated waveguide filter operating in the K band (18 – 26.5 GHz) has been demonstrated in this paper with the insertion loss of −0.57 dB in passband and provides a rejection band of 4.1 GHz.",
"title": ""
},
{
"docid": "2c9f7053d9bcd6bc421b133dd7e62d08",
"text": "Recurrent neural networks (RNN) combined with attention mechanism has proved to be useful for various NLP tasks including machine translation, sequence labeling and syntactic parsing. The attention mechanism is usually applied by estimating the weights (or importance) of inputs and taking the weighted sum of inputs as derived features. Although such features have demonstrated their effectiveness, they may fail to capture the sequence information due to the simple weighted sum being used to produce them. The order of the words does matter to the meaning or the structure of the sentences, especially for syntactic parsing, which aims to recover the structure from a sequence of words. In this study, we propose an RNN-based attention to capture the relevant and sequence-preserved features from a sentence, and use the derived features to perform the dependency parsing. We evaluated the graph-based and transition-based parsing models enhanced with the RNN-based sequence-preserved attention on the both English PTB and Chinese CTB datasets. The experimental results show that the enhanced systems were improved with significant increase in parsing accuracy.",
"title": ""
},
{
"docid": "79910e1dadf52be1b278d2e57d9bdb9e",
"text": "Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.",
"title": ""
},
{
"docid": "8b5ca0f4b12aa5d07619078d44dbb337",
"text": "Crimeware-as-a-service (CaaS) has become a prominent component of the underground economy. CaaS provides a new dimension to cyber crime by making it more organized, automated, and accessible to criminals with limited technical skills. This paper dissects CaaS and explains the essence of the underground economy that has grown around it. The paper also describes the various crimeware services that are provided in the underground",
"title": ""
},
{
"docid": "1c01d2d8d9a11fa71b811a5afbfc0250",
"text": "This paper describes an interactive tour-guide robot, whic h was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with more than 50,000 people, traversing more than 44km. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and shortterm human-robot interaction.",
"title": ""
},
{
"docid": "8dee3ada764a40fce6b5676287496ccd",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "5dc25d44b0ae6ee44ee7e24832b1bc25",
"text": "The present research aims to investigate the students' perceptions levels of Edmodo and Mobile learning and to identify the real barriers of them at Taibah University in KSA. After implemented Edmodo application as an Mlearning platform, two scales were applied on the research sample, the first scale consisted of 36 statements was constructed to measure students' perceptions towards Edmodo and M-learning, and the second scale consisted of 17 items was constructed to determine the barriers of Edmodo and M-learning. The scales were distributed on 27 students during the second semester of the academic year 2013/2014. Findings indicated that students' perceptions of Edmodo and Mobile learning is in “High” level in general, and majority of students have positive perceptions towards Edmodo and Mobile learning since they think that learning using Edmodo facilitates and increases effectiveness communication of learning, and they appreciate Edmodo because it save time. Regarding the barriers of Edmodo and Mobile learning that facing several students seem like normal range, however, they were facing a problem of low mobile battery, and storing large files in their mobile phones, but they do not face any difficulty to enter the information on small screen size of mobile devices. Finally, it is suggested adding a section for M-learning in the universities to start application of M-learning and prepare a visible and audible guide for using of M-learning in teaching and learning.",
"title": ""
},
{
"docid": "4c004745828100f6ccc6fd660ee93125",
"text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio Steganographic technique aim at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state of art literature in digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation for the reviewed techniques is also presented in this paper.",
"title": ""
},
{
"docid": "92963d6a511d5e0a767aa34f8932fe86",
"text": "A 77-GHz transmit-array on dual-layer printed circuit board (PCB) is proposed for automotive radar applications. Coplanar patch unit-cells are etched on opposite sides of the PCB and connected by through-via. The unit-cells are arranged in concentric rings to form the transmit-array for 1-bit in-phase transmission. When combined with four-substrate-integrated waveguide (SIW) slot antennas as the primary feeds, the transmit-array is able to generate four beams with a specific coverage of ±15°. The simulated and measured results of the antenna prototype at 76.5 GHz agree well, with gain greater than 18.5 dBi. The coplanar structure significantly simplifies the transmit-array design and eases the fabrication, in particular, at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "c27eecae33fe87779d3452002c1bdf8a",
"text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.",
"title": ""
},
{
"docid": "fcd349147673758eedb6dba0cd7af850",
"text": "We present VideoLSTM for end-to-end sequence learning of actions in video. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.",
"title": ""
},
{
"docid": "d71040311b8753299377b02023ba5b4c",
"text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"title": ""
},
{
"docid": "9794653cc79a0835851fdc890e908823",
"text": "In 1988, Hickerson proved the celebrated “mock theta conjectures”, a collection of ten identities from Ramanujan’s “lost notebook” which express certain modular forms as linear combinations of mock theta functions. In the context of Maass forms, these identities arise from the peculiar phenomenon that two different harmonic Maass forms may have the same non-holomorphic parts. Using this perspective, we construct several infinite families of modular forms which are differences of mock theta functions.",
"title": ""
},
{
"docid": "722b045f93c8535c64cc87a47b8c8d1f",
"text": "The kelp Laminaria digitata (Hudson) J.V. Lamouroux (Laminariales, Phaeophyceae) is currently cultivated on a small-scale in several north Atlantic countries, with much potential for expansion. The initial stages of kelp cultivation follow one of two methods: either maximising (gametophyte method) or minimising (direct method) the vegetative growth phase prior to gametogenesis. The gametophyte method is of increasing interest because of its utility in strain selection programmes. In spite of this, there are no studies of L. digitata gametophyte growth and reproductive capacity under commercially relevant conditions. Vegetative growth measured by length and biomass, and rate of gametogenesis, was examined in a series of experiments. A two-way fixed-effects model was used to examine the effects of both photoperiod (8:12; 12:12; 16:8, 24:0 L:D) and commonly used/commercially available growth media (f/2; Algoflash; Provasoli Enriched Seawater) on the aforementioned parameters. All media resulted in good performance of gametophytes under conditions favouring vegetative growth, while f/2 clearly resulted in better gametophyte performance and a faster rate of gametogenesis under conditions stimulating transition to fertility. Particularly, the extent of sporophyte production (% of gametophytes that produced sporophytes) at the end of the experiment showed clear differences between treatments in favour of f/2: f/2 = 30%; Algoflash = 9%; Provasoli Enriched Seawater = 2%. The effect of photoperiod was ambiguous, with evidence to suggest that the benefit of continuous illumination is less than expected. Confirmation of photoperiodic effect is necessary, using biomass as a measure of productivity and taking greater account of effects of genotypic variability.",
"title": ""
},
{
"docid": "0e459d7e3ffbf23c973d4843f701a727",
"text": "The role of psychological flexibility in mental health stigma and psychological distress for the stigmatizer.",
"title": ""
},
{
"docid": "42a6b6ac31383046cf11bcf16da3207e",
"text": "Epigenome-wide association studies represent one means of applying genome-wide assays to identify molecular events that could be associated with human phenotypes. The epigenome is especially intriguing as a target for study, as epigenetic regulatory processes are, by definition, heritable from parent to daughter cells and are found to have transcriptional regulatory properties. As such, the epigenome is an attractive candidate for mediating long-term responses to cellular stimuli, such as environmental effects modifying disease risk. Such epigenomic studies represent a broader category of disease -omics, which suffer from multiple problems in design and execution that severely limit their interpretability. Here we define many of the problems with current epigenomic studies and propose solutions that can be applied to allow this and other disease -omics studies to achieve their potential for generating valuable insights.",
"title": ""
},
{
"docid": "9cdddf98d24d100c752ea9d2b368bb77",
"text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathoglogical conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduce the sensitivity of models on the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.",
"title": ""
}
] |
scidocsrr
|
eec3b8577a6a4e08957132ee20df5fb2
|
Management accounting and integrated information systems: A literature review
|
[
{
"docid": "97cfd37d4dc87bbd2c454d07d5ec664e",
"text": "The current study examined the longitudinal impact of ERP adoption on firm performance by matching 63 firms identified by Hayes et al. [J. Inf. Syst. 15 (2001) 3] with peer firms that had not adopted ERP systems. Results indicate that return on assets (ROA), return on investment (ROI), and asset turnover (ATO) were significantly better over a 3-year period for adopters, as compared to nonadopters. Interestingly, our results are consistent with Poston and Grabski [Int. J. Account. Inf. Syst. 2 (2001) 271] who reported no preto post-adoption improvement in financial performance for ERP firms. Rather, significant differences arise in the current study because the financial performance of nonadopters decreased over time while it held steady for adopters. We also report a significant interaction between firm size and financial health for ERP adopters with respect to ROA, ROI, and return on sales (ROS). Specifically, we found a positive (negative) relationship between financial health and performance for small (large) firms. Study findings shed new light on the productivity paradox associated with ERP systems and suggest that ERP adoption helps firms gain a competitive advantage over nonadopters. D 2003 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "3a061755fbb1291046b95ba425dfe77e",
"text": "Understanding the return on investments in information technology (IT) is the focus of a large and growing body of research. The objective of this paper is to synthesize this research and develop a model to guide future research in the evaluation of information technology investments. We focus on archival studies that use accounting or market measures of firm performance. We emphasize those studies where accounting researchers with interest in market-level analyses of systems and technology issues may hold a competitive advantage over traditional information systems (IS) researchers. We propose numerous opportunities for future research. These include examining the relation between IT and business processes, and business processes and overall firm performance, understanding the effect of contextual factors on the IT-performance relation, examining the IT-performance relation in an international context, and examining the interactive effects of IT spending and IT management on firm performance.",
"title": ""
}
] |
[
{
"docid": "c077231164a8a58f339f80b83e5b4025",
"text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.",
"title": ""
},
{
"docid": "1dac710a7c845bd3a55d8d92c18e3648",
"text": "PURPOSE\nWe have conducted experiments with an innovatively designed robot endoscope holder for laparoscopic surgery that is small and low cost.\n\n\nMATERIALS AND METHODS\nA compact light endoscope robot (LER) that is placed on the patient's skin and can be used with the patient in the lateral or dorsal supine position was tested on cadavers and laboratory pigs in order to allow successive modifications. The current control system is based on voice recognition. The range of vision is 360 degrees with an angle of 160 degrees . Twenty-three procedures were performed.\n\n\nRESULTS\nThe tests made it possible to advance the prototype on a variety of aspects, including reliability, steadiness, ergonomics, and dimensions. The ease of installation of the robot, which takes only 5 minutes, and the easy handling made it possible for 21 of the 23 procedures to be performed without an assistant.\n\n\nCONCLUSION\nThe LER is a camera holder guided by the surgeon's voice that can eliminate the need for an assistant during laparoscopic surgery. The ease of installation and manufacture should make it an effective and inexpensive system for use on patients in the lateral and dorsal supine positions. Randomized clinical trials will soon validate a new version of this robot prior to marketing.",
"title": ""
},
{
"docid": "4cb41f9de259f18cd8fe52d2f04756a6",
"text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. JEL Classification: D12, C21",
"title": ""
},
{
"docid": "1cacfd4da5273166debad8a6c1b72754",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "716cb240d2fcf14d3f248e02d79d9d57",
"text": "OBJECTIVE\nSocial media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media.\n\n\nMETHODS\nWe introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique.\n\n\nRESULTS\nADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance.\n\n\nCONCLUSION\nIt is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets.",
"title": ""
},
{
"docid": "eed511e921c130204354cafceb5b0624",
"text": "Mobile technology has become increasingly common in today’s everyday life. However, mobile payment is surprisingly not among the frequently used mobile services, although technologically advanced solutions exist. Apparently, there is still a lack of acceptance of mobile payment services among consumers. The conceptual model developed and tested in this research thus focuses on factors determining consumers’ acceptance of mobile payment services. The empirical results show particularly strong support for the effects of compatibility, individual mobility, and subjective norm. Our study offers several implications for managers in regards to marketing mobile payment solutions to increase consumers’ intention to use these services. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "66fa9b79b1034e1fa3bf19857b5367c2",
"text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). We are grateful to Nick Barberis, Gary Becker, Jonathan Bendor, Larry Blume, Simon Board, Eddie Dekel, Stefano DellaVigna, Darrell Duffie, David Easley, Glenn Ellison, Simon Gervais, Ed Glaeser, Ken Judd, David Kreps, Edward Lazear, George Loewenstein, Lee Nelson, Anthony Neuberger, Matthew Rabin, José Scheinkman, Antoinette Schoar, Peter Sorenson, Pietro Veronesi, Richard Zeckhauser, three anonymous referees, and seminar participants at the American Finance Association Annual Meetings, Boston University, Cornell, Carnegie-Mellon, ESSEC, the European Summer Symposium in Financial Markets at Gerzensee, HEC, the Hoover Institution, Insead, MIT, the NBER Asset Pricing Conference, the Northwestern Theory Summer Workshop, NYU, the Stanford Institute for Theoretical Economics, Stanford, Texas A&M, UCLA, U.C. Berkeley, Université Libre de Bruxelles, University of Michigan, University of Texas at Austin, University of Tilburg, and the Utah Winter Finance Conference for helpful comments and discussions. All errors are our own.",
"title": ""
},
{
"docid": "d1475e197b300489acedf8c0cbe8f182",
"text": "—The publication of IEC 61850-90-1 \" Use of IEC 61850 for the communication between substations \" and the draft of IEC 61850-90-5 \" Use of IEC 61850 to transmit synchrophasor information \" opened the possibility to study IEC 61850 GOOSE Message over WAN not only in the layer 2 (link layer) but also in the layer 3 (network layer) in the OSI model. In this paper we examine different possibilities to make feasible teleprotection in the network layer over WAN sharing the communication channel with automation, management and maintenance convergence services among electrical energy substations.",
"title": ""
},
{
"docid": "c1b34059a896564df02ef984085b93a0",
"text": "Robotics has become a standard tool in outreaching to grades K-12 and attracting students to the STEM disciplines. Performing these activities in the class room usually requires substantial time commitment by the teacher and integration into the curriculum requires major effort, which makes spontaneous and short-term engagements difficult. This paper studies using “Cubelets”, a modular robotic construction kit, which requires virtually no setup time and allows substantial engagement and change of perception of STEM in as little as a 1-hour session. This paper describes the constructivist curriculum and provides qualitative and quantitative results on perception changes with respect to STEM and computer science in particular as a field of study.",
"title": ""
},
{
"docid": "15316c80d2a880b06846e8dd398a5c3f",
"text": "One weak spot is all it takes to open secured digital doors and online accounts causing untold damage and consequences.",
"title": ""
},
{
"docid": "f8821f651731943ce1652bc8a1d2c0d6",
"text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnera-bilities, and so on. Put in this position , even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists— have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabili-ties and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …",
"title": ""
},
{
"docid": "df4b4119653789266134cf0b7571e332",
"text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods. In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.",
"title": ""
},
{
"docid": "8306854901811a5a64a2a2fe8ec554d0",
"text": "OBJECTIVE\nTo summarise the benefits and harms of treatments for women with gestational diabetes mellitus.\n\n\nDESIGN\nSystematic review and meta-analysis of randomised controlled trials.\n\n\nDATA SOURCES\nEmbase, Medline, AMED, BIOSIS, CCMed, CDMS, CDSR, CENTRAL, CINAHL, DARE, HTA, NHS EED, Heclinet, SciSearch, several publishers' databases, and reference lists of relevant secondary literature up to October 2009. Review methods Included studies were randomised controlled trials of specific treatment for gestational diabetes compared with usual care or \"intensified\" compared with \"less intensified\" specific treatment.\n\n\nRESULTS\nFive randomised controlled trials matched the inclusion criteria for specific versus usual treatment. All studies used a two step approach with a 50 g glucose challenge test or screening for risk factors, or both, and a subsequent 75 g or 100 g oral glucose tolerance test. Meta-analyses did not show significant differences for most single end points judged to be of direct clinical importance. In women specifically treated for gestational diabetes, shoulder dystocia was significantly less common (odds ratio 0.40, 95% confidence interval 0.21 to 0.75), and one randomised controlled trial reported a significant reduction of pre-eclampsia (2.5 v 5.5%, P=0.02). For the surrogate end point of large for gestational age infants, the odds ratio was 0.48 (0.38 to 0.62). In the 13 randomised controlled trials of different intensities of specific treatments, meta-analysis showed a significant reduction of shoulder dystocia in women with more intensive treatment (0.31, 0.14 to 0.70).\n\n\nCONCLUSIONS\nTreatment for gestational diabetes, consisting of treatment to lower blood glucose concentration alone or with special obstetric care, seems to lower the risk for some perinatal complications. Decisions regarding treatment should take into account that the evidence of benefit is derived from trials for which women were selected with a two step strategy (glucose challenge test/screening for risk factors and oral glucose tolerance test).",
"title": ""
},
{
"docid": "c18910a5fd622da55f2a2bc61703d6b8",
"text": "The emergence of online social networks has revolutionized the way people seek and share information. Nowadays, popular online social sites as Twitter, Facebook and Google+ are among the major news sources as well as the most effective channels for viral marketing. However, these networks also became the most effective channel for spreading misinformation, accidentally or maliciously. The widespread diffusion of inaccurate information or fake news can lead to undesirable and severe consequences, such as widespread panic, libelous campaigns and conspiracies. In order to guarantee the trustworthiness of online social networks it is a crucial challenge to find effective strategies to contrast the spread of the misinformation in the network. In this paper we concentrate our attention on two problems related to the diffusion of misinformation in social networks: identify the misinformation sources and limit its diffusion in the network. We consider a social network where some nodes have already been infected from misinformation. We first provide an heuristics to recognize the set of most probable sources of the infection. Then, we provide an heuristics to place a few monitors in some network nodes in order to control information diffused by the suspected nodes and block misinformation they injected in the network before it reaches a large part of the network. To verify the quality and efficiency of our suggested solutions, we conduct experiments on several real-world networks. Empirical results indicate that our heuristics are among the most effective known in literature.",
"title": ""
},
{
"docid": "ca4e3f243b2868445ecb916c081e108e",
"text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. 
One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. 
The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t. During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj, v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its paths have no conflicts. (A conflict, as well as a constraint, may also apply to an edge when two agents traverse the same edge in opposite directions.) A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the low-level search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f-value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N. Once a consistent path has be",
"title": ""
},
{
"docid": "01bfb4c4c164bcb3faf9879284d566d3",
"text": "Emotions are multifaceted, but a key aspect of emotion involves the assessment of the value of environmental stimuli. This article reviews the many psychological representations, including representations of stimulus value, which are formed in the brain during Pavlovian and instrumental conditioning tasks. These representations may be related directly to the functions of cortical and subcortical neural structures. The basolateral amygdala (BLA) appears to be required for a Pavlovian conditioned stimulus (CS) to gain access to the current value of the specific unconditioned stimulus (US) that it predicts, while the central nucleus of the amygdala acts as a controller of brainstem arousal and response systems, and subserves some forms of stimulus-response Pavlovian conditioning. The nucleus accumbens, which appears not to be required for knowledge of the contingency between instrumental actions and their outcomes, nevertheless influences instrumental behaviour strongly by allowing Pavlovian CSs to affect the level of instrumental responding (Pavlovian-instrumental transfer), and is required for the normal ability of animals to choose rewards that are delayed. The prelimbic cortex is required for the detection of instrumental action-outcome contingencies, while insular cortex may allow rats to retrieve the values of specific foods via their sensory properties. The orbitofrontal cortex, like the BLA, may represent aspects of reinforcer value that govern instrumental choice behaviour. Finally, the anterior cingulate cortex, implicated in human disorders of emotion and attention, may have multiple roles in responding to the emotional significance of stimuli and to errors in performance, preventing responding to inappropriate stimuli.",
"title": ""
},
{
"docid": "682f09b39cb82492c37789ff6ad66389",
"text": "Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. This deterioration is the primary risk factor for major human pathologies, including cancer, diabetes, cardiovascular disorders, and neurodegenerative diseases. Aging research has experienced an unprecedented advance over recent years, particularly with the discovery that the rate of aging is controlled, at least to some extent, by genetic pathways and biochemical processes conserved in evolution. This Review enumerates nine tentative hallmarks that represent common denominators of aging in different organisms, with special emphasis on mammalian aging. These hallmarks are: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. A major challenge is to dissect the interconnectedness between the candidate hallmarks and their relative contributions to aging, with the final goal of identifying pharmaceutical targets to improve human health during aging, with minimal side effects.",
"title": ""
},
{
"docid": "4ff50e433ba7a5da179c7d8e5e05cb22",
"text": "Social network information is now being used in ways for which it may have not been originally intended. In particular, increased use of smartphones capable ofrunning applications which access social network information enable applications to be aware of a user's location and preferences. However, current models forexchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along withour design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.",
"title": ""
}
] |
scidocsrr
|
57bdc835f025c6dba6e67ae55c7254cd
|
Polymorphic malware detection using sequence classification methods and ensembles
|
[
{
"docid": "b37de4587fbadad9258c1c063b03a07a",
"text": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.",
"title": ""
},
{
"docid": "252f4bcaeb5612a3018578ec2008dd71",
"text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .",
"title": ""
}
] |
[
{
"docid": "5cea0630252f2d36c849be957503944e",
"text": "In this paper, we propose an efficient in-DBMS solution for the problem of sub-trajectory clustering and outlier detection in large moving object datasets. The method relies on a two-phase process: a voting-and-segmentation phase that segments trajectories according to a local density criterion and trajectory similarity criteria, followed by a sampling-and-clustering phase that selects the most representative sub-trajectories to be used as seeds for the clustering process. Our proposal, called STClustering (for Sampling-based Sub-Trajectory Clustering) is novel since it is the first, to our knowledge, that addresses the pure spatiotemporal sub-trajectory clustering and outlier detection problem in a real-world setting (by ‘pure’ we mean that the entire spatiotemporal information of trajectories is taken into consideration). Moreover, our proposal can be efficiently registered as a database query operator in the context of extensible DBMS (namely, PostgreSQL in our current implementation). The effectiveness and the efficiency of the proposed algorithm are experimentally validated over synthetic and real-world trajectory datasets, demonstrating that STClustering outperforms an off-the-shelf in-DBMS solution using PostGIS by several orders of magnitude. CCS Concepts • Information systems ➝ Information systems applications ➝ Data mining ➝ Clustering • Information systems ➝ Information systems applications ➝ Spatio-temporal systems",
"title": ""
},
{
"docid": "ca26daaa9961f7ba2343ae84245c1181",
"text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.",
"title": ""
},
{
"docid": "fbcf9ddf08fc14c4551a82653d53963d",
"text": "Non-normal data and heteroscedasticity are two common problems encountered when dealing with testing for location measures. Non-normality exists either from the shape of the distributions or by the presence of outliers. Outliers occur when there exist data values that are very different from the majority of cases in the data set. Outliers are important because they can influence the results of the data analysis. This paper demonstrated the detection of outliers by using robust scale estimators such as MADn, Tn and LMSn as trimming criteria. These criteria will trim extreme values without prior determination of trimming percentage. Sample data was used in this study to illustrate how extreme values are removed by these trimming criteria. We will present how these were done in a SAS program.",
"title": ""
},
{
"docid": "20d95255d3cf72174cbdc6f8614796a5",
"text": "This paper gives a review of the recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as computer vision, applying them to time-series data is gaining increasing attention. This paper overviews the particular challenges present in time-series data and provides a review of the works that have either applied time-series data to unsupervised feature learning algorithms or alternatively have contributed to modi cations of feature learning algorithms to take into account the challenges present in time-series data.",
"title": ""
},
{
"docid": "8eb96feea999ce77f2b56b7941af2587",
"text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility. a 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3a322129019eed67686018404366fe0b",
"text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.",
"title": ""
},
{
"docid": "8bf5f5e332159674389d2026514fbc15",
"text": "This project examines the nature of password cracking and modern applications. Several applications for different platforms are studied. Different methods of cracking are explained, including dictionary attack, brute force, and rainbow tables. Password cracking across different mediums is examined. Hashing and how it affects password cracking is discussed. An implementation of two hash-based password cracking algorithms is developed, along with experimental results of their efficiency.",
"title": ""
},
{
"docid": "54ca6cb3e71574fc741c3181b8a4871c",
"text": "Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.",
"title": ""
},
{
"docid": "77cf780ce8b2c7b6de57c83f6b724dba",
"text": "BACKGROUND\nAlthough there are several case reports of facial skin ischemia/necrosis caused by hyaluronic acid filler injections, no systematic study of the clinical outcomes of a series of cases with this complication has been reported.\n\n\nMETHODS\nThe authors report a study of 20 consecutive patients who developed impending nasal skin necrosis as a primary concern, after nose and/or nasolabial fold augmentation with hyaluronic acid fillers. The authors retrospectively reviewed the clinical outcomes and the risk factors for this complication using case-control analysis.\n\n\nRESULTS\nSeven patients (35 percent) developed full skin necrosis, and 13 patients (65 percent) recovered fully after combination treatment with hyaluronidase. Although the two groups had similar age, sex, filler injection sites, and treatment for the complication, 85 percent of the patients in the full skin necrosis group were late presenters who did not receive the combination treatment with hyaluronidase within 2 days after the vascular complication first appeared. In contrast, just 15 percent of the patients in the full recovery group were late presenters (p = 0.004).\n\n\nCONCLUSIONS\nNose and nasolabial fold augmentations with hyaluronic acid fillers can lead to impending nasal skin necrosis, possibly caused by intravascular embolism and/or extravascular compression. The key for preventing the skin ischemia from progressing to necrosis is to identify and treat the ischemia as early as possible. Early (<2 days) combination treatment with hyaluronidase is associated with the full resolution of the complication.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "52e492ff5e057a8268fd67eb515514fe",
"text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.",
"title": ""
},
{
"docid": "f2377c76df4a2bcf0af063cb86befdda",
"text": "Overexpression of ErbB2, a receptor-like tyrosine kinase, is shared by several types of human carcinomas. In breast tumors the extent of overexpression has a prognostic value, thus identifying the oncoprotein as a target for therapeutic strategies. Already, antibodies to ErbB2 are used in combination with chemotherapy in the treatment of metastasizing breast cancer. The mechanisms underlying the oncogenic action of ErbB2 involve a complex network in which ErbB2 acts as a ligand-less signaling subunit of three other receptors that directly bind a large repertoire of stroma-derived growth factors. The major partners of ErbB2 in carcinomas are ErbB1 (also called EGFR) and ErbB3, a kinase-defective receptor whose potent mitogenic action is activated in the context of heterodimeric complexes. Why ErbB2-containing heterodimers are relatively oncopotent is a function of a number of processes. Apparently, these heterodimers evade normal inactivation processes, by decreasing the rate of ligand dissociation, internalizing relatively slowly and avoiding the degradative pathway by returning to the cell surface. On the other hand, the heterodimers strongly recruit survival and mitogenic pathways such as the mitogen-activated protein kinases and the phosphatidylinositol 3-kinase. Hyper-activated signaling through the ErbB-signaling network results in dysregulation of the cell cycle homeostatic machinery, with upregulation of active cyclin-D/CDK complexes. Recent data indicate that cell cycle regulators are also linked to chemoresistance in ErbB2-dependent breast carcinoma. Together with D-type cyclins, it seems that the CDK inhibitor p21Waf1 plays an important role in evasion from apoptosis. These recent findings herald a preliminary understanding of the output layer which connects elevated ErbB-signaling to oncogenesis and chemoresistance.",
"title": ""
},
{
"docid": "b009c2b4cc62f7cc430deb671de4a192",
"text": "Electric vehicles are gaining importance and help to reduce dependency on oil, increase energy efficiency of transportation, reduce carbon emissions and noise, and avoid tail pipe emissions. Because of short driving distances, high mileages, and intermediate waiting times, fossil-fuelled taxi vehicles are ideal candidates for being replaced by battery electric vehicles (BEVs). Moreover, taxis as BEVs would increase visibility of electric mobility and therefore encourage others to purchase an electric vehicle. Prior to replacing conventional taxis with BEVs, a suitable charging infrastructure has to be established. This infrastructure, which is a prerequisite for the use of BEVs in practice, consists of a sufficiently dense network of charging stations taking into account the lower driving ranges of BEVs. In this case study we propose a decision support system for placing charging stations to satisfy the charging demand of electric taxi vehicles. Operational taxi data from about 800 vehicles is used to identify and estimate the charging demand for electric taxis based on frequent origins and destinations of trips. Next, a variant of the maximal covering location problem is formulated and solved, aiming at satisfying as much charging demand as possible with a limited number of charging stations. Already existing fast charging locations are considered in the optimization problem. In this work, we focus on finding regions in which charging stations should be placed, rather than exact locations. The exact location within an area is identified in a post-optimization phase (e.g., by authorities), where environmental conditions are considered, e.g., the capacity of the power network, availability of space, and legal issues. Our approach is implemented in the city of Vienna, Austria, in the course of an applied research project conducted in 2014. Local authorities, power network operators, representatives of taxi driver guilds as well as a radio taxi provider participated in the project and identified exact locations for charging stations based on our decision support system. ∗Corresponding author Email addresses: johannes.asamer@ait.ac.at (Johannes Asamer), martin.reinthaler@ait.ac.at (Martin Reinthaler), mario.ruthmair@univie.ac.at (Mario Ruthmair), markus.straub@ait.ac.at (Markus Straub), jakob.puchinger@centralesupelec.fr (Jakob Puchinger) Preprint submitted to Elsevier November 6, 2015",
"title": ""
},
{
"docid": "51db8011d3dfd60b7808abc6868f7354",
"text": "Security issue in cloud environment is one of the major obstacle in cloud implementation. Network attacks make use of the vulnerability in the network and the protocol to damage the data and application. Cloud follows distributed technology; hence it is vulnerable for intrusions by malicious entities. Intrusion detection systems (IDS) has become a basic component in network protection infrastructure and a necessary method to defend systems from various attacks. Distributed denial of service (DDoS) attacks are a great problem for a user of computers linked to the Internet. Data mining techniques are widely used in IDS to identify attacks using the network traffic. This paper presents and evaluates a Radial basis function neural network (RBF-NN) detector to identify DDoS attacks. Many of the training algorithms for RBF-NNs start with a predetermined structure of the network that is selected either by means of a priori knowledge or depending on prior experience. The resultant network is frequently inadequate or needlessly intricate and a suitable network structure could be configured only by trial and error method. This paper proposes Bat algorithm (BA) to configure RBF-NN automatically. Simulation results demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "6eebd82e4d2fe02e9b26190638e9d159",
"text": "Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management. We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and the coordinating agile and non-agile teams.",
"title": ""
},
{
"docid": "e830098f9c045d376177e6d2644d4a06",
"text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.",
"title": ""
},
{
"docid": "500202f494dc3769fdb0c7de98aec9c7",
"text": "Clocked comparators have found widespread use in noise sensitive applications including analog-to-digital converters, wireline receivers, and memory bit-line detectors. However, their nonlinear, time-varying dynamics resulting in discrete output levels have discouraged the use of traditional linear time-invariant (LTI) small-signal analysis and noise simulation techniques. This paper describes a linear, time-varying (LTV) model of clock comparators that can accurately predict the decision error probability without resorting to more general stochastic system models. The LTV analysis framework in conjunction with the linear, periodically time-varying (LPTV) simulation algorithms available from RF circuit simulators can provide insights into the intrinsic sampling and decision operations of clock comparators and the major contribution sources to random decision errors. Two comparators are simulated and compared with laboratory measurements. A 90-nm CMOS comparator is measured to have an equivalent input-referred random noise of 0.73 mVrms for dc inputs, matching simulation results with a short channel excess noise factor ¿ = 2.",
"title": ""
},
{
"docid": "6e140b1901184183c7cc4cfc10532b84",
"text": "During January and February 2001, an outbreak of febrile illness associated with altered sensorium was observed in Siliguri, West Bengal, India. Laboratory investigations at the time of the outbreak did not identify an infectious agent. Because Siliguri is in close proximity to Bangladesh, where outbreaks of Nipah virus (NiV) infection were recently described, clinical material obtained during the Siliguri outbreak was retrospectively analyzed for evidence of NiV infection. NiV-specific immunoglobulin M (IgM) and IgG antibodies were detected in 9 of 18 patients. Reverse transcription-polymerase chain reaction (RT-PCR) assays detected RNA from NiV in urine samples from 5 patients. Sequence analysis confirmed that the PCR products were derived from NiV RNA and suggested that the NiV from Siliguri was more closely related to NiV isolates from Bangladesh than to NiV isolates from Malaysia. NiV infection has not been previously detected in India.",
"title": ""
},
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "a895b7888b15e49a2140bcea9c20e0b9",
"text": "Deep convolutional neural networks (DNNs) have brought significant performance improvements to face recognition. However the training can hardly be carried out on mobile devices because the training of these models requires much computational power. An individual user with the demand of deriving DNN models from her own datasets usually has to outsource the training procedure onto a cloud or edge server. However this outsourcing method violates privacy because it exposes the users’ data to curious service providers. In this paper, we utilize the differentially private mechanism to enable the privacy-preserving edge based training of DNN face recognition models. During the training, DNN is split between the user device and the edge server in a way that both private data and model parameters are protected, with only a small cost of local computations. We show that our mechanism is capable of training models in different scenarios, e.g., from scratch, or through finetuning over existed models.",
"title": ""
},
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
}
] |
scidocsrr
|
b05b88e5f94806a65b945385f16b9dc5
|
Directly Modeling Missing Data in Sequences with RNNs: Improved Classification of Clinical Time Series
|
[
{
"docid": "42c890832d861ad2854fd1f56b13eb45",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
}
] |
[
{
"docid": "9eaedcf7ab75f690f42466375a9ceaa6",
"text": "This paper presents a Current Mode Logic (CML) transmitter circuit that forms part of a Serializer/ Deserializer IP core used in a high speed I/O links targeted for 10+ Gbps Ethernet applications. The paper discusses the 3 tap FIR filter equalization implemented to minimize the effects of Inter Symbol interference (ISI) and attenuation of high speed signal content in the channel. The paper also discusses on the design optimization implemented using hybrid segmentation of driver segments which results in improved control on the step sizes variations, Differential Non Linearity (DNL) errors at segment boundaries over Process mismatch variations.",
"title": ""
},
{
"docid": "597b893e42df1bfba3d17b2d3ec31539",
"text": "Genetic Programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. Lately, there has been considerable interest in GP's community to develop semantic genetic operators, i.e., operators that work on the phenotype. In this contribution, we describe EvoDAG (Evolving Directed Acyclic Graph) which is a Python library that implements a steady-state semantic Genetic Programming with tournament selection using an extension of our previous crossover operators based on orthogonal projections in the phenotype space. To show the effectiveness of EvoDAG, it is compared against state-of-the-art classifiers on different benchmark problems, experimental results indicate that EvoDAG is very competitive.",
"title": ""
},
{
"docid": "44cda3da01ebd82fe39d886f8520ce13",
"text": "This paper describes some of the work on stereo that has been going on at INRIA in the last four years. The work has concentrated on obtaining dense, accurate, and reliable range maps of the environment at rates compatible with the real-time constraints of such applications as the navigation of mobile vehicles in man-made or natural environments. The class of algorithms which has been selected among several is the class of correlationbased stereo algorithms because they are the only ones that can produce su ciently dense range maps with an algorithmic structure which lends itself nicely to fast implementations because of the simplicity of the underlying computation. We describe the various improvements that we have brought to the original idea, including validation and characterization of the quality of the matches, a recursive implementation of the score computation which makes the method independent of the size of the correlation window, and a calibration method which does not require the use of a calibration pattern. We then describe two implementations of this algorithm on two very di erent pieces of hardware. The rst implementation is on a board with four Digital Signal Processors designed jointly with Matra MSII. This implementation can produce 64 64 range maps at rates varying between 200 and 400 ms, depending upon the range of disparities. The second implementation is on a board developed by DEC-PRL and can perform the cross-correlation of two 256 256 images in 140 ms. The rst implementation has been integrated in the navigation system of the INRIA cart and used to correct for inertial and odometric errors in navigation experiments both indoors and outdoors on road. This is the rst application of our correlation-based algorithm which is described in the paper. The second application has been done jointly with people from the french national space agency (CNES) to study the possibility of using stereo on a future planetary rover for the construction of Digital Elevation Maps. We have shown that real time stereo is possible today at low-cost and can be applied in real applications. The algorithm that has been described is not the most sophisticated available but we have made it robust and reliable thanks to a number of improvements. Even though each of these improvements is not earth-shattering from the pure research point of view, altogether they have allowed us to go beyond a very important threshold. This threshold measures the di erence between a program that runs in the laboratory on a few images and one that works continuously for hours on a sequence of stereo pairs and produces results at such rates and of such quality that they can be used to guide a real vehicle or to produce Discrete Elevation Maps. We believe that this threshold has only been reached in a very small number of cases.",
"title": ""
},
{
"docid": "a218d5aac0f5d52d3828cdff05a9009b",
"text": "This paper proposes a single-stage high-power-factor (HPF) LED driver with coupled inductors for street-lighting applications. The presented LED driver integrates a dual buck-boost power-factor-correction (PFC) ac-dc converter with coupled inductors and a half-bridge-type LLC dc-dc resonant converter into a single-stage-conversion circuit topology. The coupled inductors inside the dual buck-boost converter subcircuit are designed to be operated in the discontinuous-conduction mode for obtaining high power-factor (PF). The half-bridge-type LLC resonant converter is designed for achieving soft-switching on two power switches and output rectifier diodes, in order to reduce their switching losses. This paper develops and implements a cost-effective driver for powering a 144-W-rated LED street-lighting module with input utility-line voltage ranging from 100 to 120 V. The tested prototype yields satisfying experimental results, including high circuit efficiency (>89.5%), low input-current total-harmonic distortion (<; 5.5%), high PF (> 0.99), low output-voltage ripple (<; 7.5%), and low output-current ripple (<; 5%), thus demonstrating the feasibility of the proposed LED driver.",
"title": ""
},
{
"docid": "982dae78e301aec02012d9834f000d6d",
"text": "This paper investigates a universal approach of synthesizing arbitrary ternary logic circuits in quantum computation based on the truth table technology. It takes into account of the relationship of classical logic and quantum logic circuits. By adding inputs with constant value and garbage outputs, the classical non-reversible logic can be transformed into reversible logic. Combined with group theory, it provides an algorithm using the ternary Swap gate, ternary NOT gate and ternary Toffoli gate library. Simultaneously, the main result shows that the numbers of qutrits we use are minimal compared to other methods. We also illustrate with two examples to test our approach.",
"title": ""
},
{
"docid": "9385259a7dd9ed123f61141d933ab2a4",
"text": "Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included generalized least squares family of models and a Bayesian implementation of the conditional auto-regressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection when using the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.",
"title": ""
},
{
"docid": "64cbc5ec72c81bd44e992076de5edc56",
"text": "The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : R → R. Our main theorem is that, if G is L-Lipschitz, then roughly O(k logL) random Gaussian measurements suffice for an `2/`2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.",
"title": ""
},
{
"docid": "8f4cebc98552d3024b477c2f1576e24f",
"text": "The SentiMAG Multicentre Trial evaluated a new magnetic technique for sentinel lymph node biopsy (SLNB) against the standard (radioisotope and blue dye or radioisotope alone). The magnetic technique does not use radiation and provides both a color change (brown dye) and a handheld probe for node localization. The primary end point of this trial was defined as the proportion of sentinel nodes detected with each technique (identification rate). A total of 160 women with breast cancer scheduled for SLNB, who were clinically and radiologically node negative, were recruited from seven centers in the United Kingdom and The Netherlands. SLNB was undertaken after administration of both the magnetic and standard tracers (radioisotope with or without blue dye). A total of 170 SLNB procedures were undertaken on 161 patients, and 1 patient was excluded, leaving 160 patients for further analysis. The identification rate was 95.0 % (152 of 160) with the standard technique and 94.4 % (151 of 160) with the magnetic technique (0.6 % difference; 95 % upper confidence limit 4.4 %; 6.9 % discordance). Of the 22 % (35 of 160) of patients with lymph node involvement, 16 % (25 of 160) had at least 1 macrometastasis, and 6 % (10 of 160) had at least a micrometastasis. Another 2.5 % (4 of 160) had isolated tumor cells. Of 404 lymph nodes removed, 297 (74 %) were true sentinel nodes. The lymph node retrieval rate was 2.5 nodes per patient overall, 1.9 nodes per patient with the standard technique, and 2.0 nodes per patient with the magnetic technique. The magnetic technique is a feasible technique for SLNB, with an identification rate that is not inferior to the standard technique.",
"title": ""
},
{
"docid": "ec0da5cea716d1270b2143ffb6c610d6",
"text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. This paper will discuss in details the development of ARS from the feasibility study until the design phase.",
"title": ""
},
{
"docid": "52315f23e419ba27e6fd058fe8b7aa9d",
"text": "Detected obstacles overlaid on the original image Polar map: The agent is at the center of the map, facing 00. The blue points correspond to polar positions of the obstacle points around the agent. 1. Talukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle SympoTalukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle Symposium, 2002. IEEE. Vol. 2. IEEE, 2002. 2. Sun, Deqing, Stefan Roth, and Michael J. Black. \"Secrets of optical flow estimation and their principles.\" Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. 3. Bernini, Nicola, et al. \"Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey.\" Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. IEEE, 2014. 4. Broggi, Alberto, et al. \"Stereo obstacle detection in challenging environments: the VIAC experience.\" Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.",
"title": ""
},
{
"docid": "906b785365a27e5d9c7f0a622996264b",
"text": "In this paper, we put forward a new pre–processing scheme for automatic analysis of dermoscopic images. Our contribu tions are two-fold. First, we present a procedure, an extens ion of previous approaches, which succeeds in removing confoun ding factors from dermoscopic images: these include shading ind uce by imaging non-flat skin surfaces and the effect of light-int ensity falloff toward the edges of the dermoscopic image. This proc edure is shown to facilitate the detection and removal of arti f cts such as hairs as well. Second, we present a novel simple yet ef fective greyscale conversion approach that is based on phys ics and biology of human skin. Our proposed greyscale image provides high separability between a pigmented lesion and norm al skin surrounding it. Finally, using our pre–processing sch eme, we perform segmentation based on simple grey-level thresho lding, with results outperforming the state of the art.",
"title": ""
},
{
"docid": "34fa7e6d5d4f1ab124e3f12462e92805",
"text": "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an ill-posed inverse problem. One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisy-patches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.",
"title": ""
},
{
"docid": "532d5655281bf409dd6a44c1f875cd88",
"text": "BACKGROUND\nOlder adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.\n\n\nOBJECTIVE\nThe purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.\n\n\nMETHODS\nOne wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used. Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).\n\n\nRESULTS\nAfter controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01, (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).\n\n\nCONCLUSIONS\nUsing the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities.",
"title": ""
},
{
"docid": "dfac485205134103cb66b07caa6fbaf0",
"text": "Electrical responses of the single muscle fibre (SFER) by stimulation of the motor terminal nerve-endings have been investigated in normal subjects at various ages in vivo. Shape, latency, rise-time and interspike distance seem to be SFER's most interesting parameters of the functional organisation of the motor subunits and their terminal fractions. \"Time\" parameters of SFER are in agreement with the anatomo-functional characteristics of the excited tissues during ageing.",
"title": ""
},
{
"docid": "944d467bb6da4991127b76310fec585b",
"text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.",
"title": ""
},
{
"docid": "e04e1dc5cd4d0729c661375486884b14",
"text": "The Internet of Things (IoT) and the Web are closely related to each other. On the one hand, the Semantic Web has been including vocabularies and semantic models for the Internet of Things. On the other hand, the so-called Web of Things (WoT) advocates architectures relying on established Web technologies and RESTful interfaces for the IoT. In this paper, we present a vocabulary for WoT that aims at defining IoT concepts using terms from the Web. Notably, it includes two concepts identified as the core WoT resources: Thing Description (TD) and Interaction, that have been first elaborated by the W3C interest group for WoT. Our proposal is built upon the ontological pattern Identifier, Resource, Entity (IRE) that was originally designed for the Semantic Web. To better analyze the alignments our proposal allows, we reviewed existing IoT models as a vocabulary graph, complying with the approach of Linked Open Vocabularies (LOV).",
"title": ""
},
{
"docid": "274ce66c0bcc77a1e4a858bef9e41111",
"text": "It is a timely issue to understand the impact of bilingualism upon brain structure in healthy aging and upon cognitive decline given evidence of its neuroprotective effects. Plastic changes induced by bilingualism were reported in young adults in the left inferior parietal lobule (LIPL) and its right counterpart (RIPL) (Mechelli et al., 2004). Moreover, both age of second language (L2) acquisition and L2 proficiency correlated with increased grey matter (GM) in the LIPL/RIPL. However it is unknown whether such findings replicate in older bilinguals. We examined this question in an aging bilingual population from Hong Kong. Results from our Voxel Based Morphometry study show that elderly bilinguals relative to a matched monolingual control group also have increased GM volumes in the inferior parietal lobules underlining the neuroprotective effect of bilingualism. However, unlike younger adults, age of L2 acquisition did not predict GM volumes. Instead, LIPL and RIPL appear differentially sensitive to the effects of L2 proficiency and L2 exposure with LIPL more sensitive to the former and RIPL more sensitive to the latter. Our data also intimate that such * Corresponding author. University Vita-Salute San Raffaele, Via Olgettina 58, 20132 Milan, Italy. Tel.: þ39 0226434888. E-mail addresses: abutalebi.jubin@hsr.it, jubin@hku.hk (J. Abutalebi).",
"title": ""
},
{
"docid": "df158503822641430e6f17a43655cf2e",
"text": "Open information extraction (OIE) is the process to extract relations and their arguments automatically from textual documents without the need to restrict the search to predefined relations. In recent years, several OIE systems for the English language have been created but there is not any system for the Vietnamese language. In this paper, we propose a method of OIE for Vietnamese using a clause-based approach. Accordingly, we exploit Vietnamese dependency parsing using grammar clauses that strives to consider all possible relations in a sentence. The corresponding clause types are identified by their propositions as extractable relations based on their grammatical functions of constituents. As a result, our system is the first OIE system named vnOIE for the Vietnamese language that can generate open relations and their arguments from Vietnamese text with highly scalable extraction while being domain independent. Experimental results show that our OIE system achieves promising results with a precision of 83.71%.",
"title": ""
},
{
"docid": "7809fdedaf075955523b51b429638501",
"text": "PM10 prediction has attracted special legislative and scientific attention due to its harmful effects on human health. Statistical techniques have the potential for high-accuracy PM10 prediction and accordingly, previous studies on statistical methods for temporal, spatial and spatio-temporal prediction of PM10 are reviewed and discussed in this paper. A review of previous studies demonstrates that Support Vector Machines, Artificial Neural Networks and hybrid techniques show promise for suitable temporal PM10 prediction. A review of the spatial predictions of PM10 shows that the LUR (Land Use Regression) approach has been successfully utilized for spatial prediction of PM10 in urban areas. Of the six introduced approaches for spatio-temporal prediction of PM10, only one approach is suitable for high-resolved prediction (Spatial resolution < 100 m; Temporal resolution ď 24 h). In this approach, based upon the LUR modeling method, short-term dynamic input variables are employed as explanatory variables alongside typical non-dynamic input variables in a non-linear modeling procedure.",
"title": ""
}
] |
scidocsrr
|
3407fdd1aa3121aa6f110be5c6930c9e
|
A VNF-as-a-service design through micro-services disassembling the IMS
|
[
{
"docid": "bf239cb017be0b2137b0b4fd1f1d4247",
"text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.",
"title": ""
},
{
"docid": "75a637281cb0ed9c307bc900e2a0da66",
"text": "Cloud computing provides new opportunities to deploy scalable application in an efficient way, allowing enterprise applications to dynamically adjust their computing resources on demand. In this paper we analyze and test the microservice architecture pattern, used during the last years by large Internet companies like Amazon, Netflix and LinkedIn to deploy large applications in the cloud as a set of small services that can be developed, tested, deployed, scaled, operated and upgraded independently, allowing these companies to gain agility, reduce complexity and scale their applications in the cloud in a more efficient way. We present a case study where an enterprise application was developed and deployed in the cloud using a monolithic approach and a microservice architecture using the Play web framework. We show the results of performance tests executed on both applications, and we describe the benefits and challenges that existing enterprises can get and face when they implement microservices in their applications.",
"title": ""
}
] |
[
{
"docid": "ed28d1b8142a2149a1650e861deb7c53",
"text": "Over the last few years, the use of virtualization technologies has increased dramatically. This makes the demand for efficient and secure virtualization solutions become more obvious. Container-based virtualization and hypervisor-based virtualization are two main types of virtualization technologies that have emerged to the market. Of these two classes, container-based virtualization is able to provide a more lightweight and efficient virtual environment, but not without security concerns. In this paper, we analyze the security level of Docker, a well-known representative of container-based approaches. The analysis considers two areas: (1) the internal security of Docker, and (2) how Docker interacts with the security features of the Linux kernel, such as SELinux and AppArmor, in order to harden the host system. Furthermore, the paper also discusses and identifies what could be done when using Docker to increase its level of security.",
"title": ""
},
{
"docid": "7f5815a918c6d04783d68dbc041cc6a0",
"text": "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.",
"title": ""
},
{
"docid": "fbde8c336fe5d707d247faa51bb8c76c",
"text": "The paper approaches the problem of imageto-text with attention-based encoder-decoder networks that are trained to handle sequences of characters rather than words. We experiment on lines of text from a popular handwriting database with different attention mechanisms for the decoder. The model trained with softmax attention achieves the lowest test error, outperforming several other RNN-based models. Our results show that softmax attention is able to learn a linear alignment whereas the alignment generated by sigmoid attention is linear but much less precise.",
"title": ""
},
{
"docid": "b47d411ca9a59331b79931c1b1e984f6",
"text": "A novel miniature wideband rectangular patch antenna is designed for wireless local area network (WLANs) applications and operating for 5-6 GHz ISM band, and wideband applications. The proposed antenna gives a bandwidth of 4.84 to 6.56 GHz for S11<-10dB. The antenna has the dimensions of 20 mm by 15 mm by 0.8 mm on FR4 substrate. Rectangular slot and step have been used for bandwidth improvement.",
"title": ""
},
{
"docid": "ba3315636b720625e7b285b26d8d371a",
"text": "Sharing of physical infrastructure using virtualization presents an opportunity to improve the overall resource utilization. It is extremely important for a Software as a Service (SaaS) provider to understand the characteristics of the business application workload in order to size and place the virtual machine (VM) containing the application. A typical business application has a multi-tier architecture and the application workload is often predictable. Using the knowledge of the application architecture and statistical analysis of the workload, one can obtain an appropriate capacity and a good placement strategy for the corresponding VM. In this paper we propose a tool iCirrus-WoP that determines VM capacity and VM collocation possibilities for a given set of application workloads. We perform an empirical analysis of the approach on a set of business application workloads obtained from geographically distributed data centers. The iCirrus-WoP tool determines the fixed reserved capacity and a shared capacity of a VM which it can share with another collocated VM. Based on the workload variation, the tool determines if the VM should be statically allocated or needs a dynamic placement. To determine the collocation possibility, iCirrus-WoP performs a peak utilization analysis of the workloads. The empirical analysis reveals the possibility of collocating applications running in different time-zones. The VM capacity that the tool recommends, show a possibility of improving the overall utilization of the infrastructure by more than 70% if they are appropriately collocated.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
{
"docid": "91b6b9e22f191cfec87d7b62d809542c",
"text": "In the past few years, the storage and analysis of large-scale and fast evolving networks present a great challenge. Therefore, a number of different techniques have been proposed for sampling large networks. In general, network exploration techniques approximate the original networks more accurately than random node and link selection. Yet, link selection with additional subgraph induction step outperforms most other techniques. In this paper, we apply subgraph induction also to random walk and forest-fire sampling. We analyze different real-world networks and the changes of their properties introduced by sampling. We compare several sampling techniques based on the match between the original networks and their sampled variants. The results reveal that the techniques with subgraph induction underestimate the degree and clustering distribution, while overestimate average degree and density of the original networks. Techniques without subgraph induction step exhibit exactly the opposite behavior. Hence, the performance of the sampling techniques from random selection category compared to network exploration sampling does not differ significantly, while clear differences exist between the techniques with subgraph induction step and the ones without it.",
"title": ""
},
{
"docid": "6bdb8048915000b2d6c062e0e71b8417",
"text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.",
"title": ""
},
{
"docid": "186141651bfb780865712deb8c407c54",
"text": "Sample and statistically based singing synthesizers typically require a large amount of data for automatically generating expressive synthetic performances. In this paper we present a singing synthesizer that using two rather small databases is able to generate expressive synthesis from an input consisting of notes and lyrics. The system is based on unit selection and uses the Wide-Band Harmonic Sinusoidal Model for transforming samples. The first database focuses on expression and consists of less than 2 minutes of free expressive singing using solely vowels. The second one is the timbre database which for the English case consists of roughly 35 minutes of monotonic singing of a set of sentences, one syllable per beat. The synthesis is divided in two steps. First, an expressive vowel singing performance of the target song is generated using the expression database. Next, this performance is used as input control of the synthesis using the timbre database and the target lyrics. A selection of synthetic performances have been submitted to the Interspeech Singing Synthesis Challenge 2016, in which they are compared to other competing systems.",
"title": ""
},
{
"docid": "70c8caf1bdbdaf29072903e20c432854",
"text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.",
"title": ""
},
{
"docid": "3b7dcbefbbc20ca1a37fa318c2347b4c",
"text": "To better understand how individual differences influence the use of information technoiogy (IT), this study models and tests relationships among dynamic, IT-specific individual differences (i.e.. computer self-efficacy and computer anxiety). stable, situation-specific traits (i.e., personal innovativeness in IT) and stable, broad traits (i.e.. ''Cynthia Beath was the accepting senior editor for this paper. trait anxiety and negative affectivity). When compared to broad traits, the model suggests that situation-specific traits exert a more pervasive influence on IT situation-specific individual differences. Further, the modei suggests that computer anxiety mediates the influence of situationspecific traits (i.e., personal innovativeness) on computer self-efficacy. Results provide support for many of the hypothesized relationships. From a theoretical perspective, the findings help to further our understanding of the nomological network among individual differences that lead to computer self-efficacy. From a practical perspective, the findings may help IT managers design training programs that more effectiveiy increase the computer self-efficacy of users with different dispositional characteristics.",
"title": ""
},
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "d8eafd22765903ea3b2e4f0bf0f1ad9d",
"text": "Interest in \"green nanotechnology\" in nanoparticle biosynthesis is growing among researchers. Nanotechnologies, due to their physicochemical and biological properties, have applications in diverse fields, including drug delivery, sensors, optoelectronics, and magnetic devices. This review focuses on the green synthesis of silver nanoparticles (AgNPs) using plant sources. Green synthesis of nanoparticles is an eco-friendly approach, which should be further explored for the potential of different plants to synthesize nanoparticles. The sizes of AgNPs are in the range of 1 to 100 nm. Characterization of synthesized nanoparticles is accomplished through UV spectroscopy, X-ray diffraction, Fourier transform infrared spectroscopy, transmission electron microscopy, and scanning electron microscopy. AgNPs have great potential to act as antimicrobial agents. The green synthesis of AgNPs can be efficiently applied for future engineering and medical concerns. Different types of cancers can be treated and/or controlled by phytonanotechnology. The present review provides a comprehensive survey of plant-mediated synthesis of AgNPs with specific focus on their applications, e.g., antimicrobial, antioxidant, and anticancer activities.",
"title": ""
},
{
"docid": "5618f1415cace8bb8c4773a7e44a4e3f",
"text": "Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.",
"title": ""
},
{
"docid": "6c58cfbdbb424f1e2ad35339e7ee7aa6",
"text": "We present a theoretical model of a multi-input arrayed waveguide grating (AWG) based on Fourier optics and apply the model to the design of a flattened passband response. This modeling makes it possible to systematically analyze spectral performance and to clarify the physical mechanisms of the multi-input AWG. The model suggested that the width of an input/output mode-field function and the number of waveguides in the array are important factors to flatten the response. We also developed a model for a novel AWG employing cascaded Mach-Zehnder interferometers connected to the AWG input ports and numerically analyzed its optical performance to achieve low-loss, low-crosstalk, and flat-passband response. We demonstrated the usability of this model through investigations of filter performance. We also compared the filter spectrum given by this model with that given by simulation using the beam propagation method",
"title": ""
},
{
"docid": "ee1bbcdd8f332de297b6ea243da51b43",
"text": "Automatic image annotation has been an active research topic due to its great importance in image retrieval and management. However, results of the state-of-the-art image annotation methods are often unsatisfactory. Despite continuous efforts in inventing new annotation algorithms, it would be advantageous to develop a dedicated approach that could refine imprecise annotations. In this paper, a novel approach to automatically refining the original annotations of images is proposed. For a query image, an existing image annotation method is first employed to obtain a set of candidate annotations. Then, the candidate annotations are re-ranked and only the top ones are reserved as the final annotations. By formulating the annotation refinement process as a Markov process and defining the candidate annotations as the states of a Markov chain, a content-based image annotation refinement (CIAR) algorithm is proposed to re-rank the candidate annotations. It leverages both corpus information and the content feature of a query image. Experimental results on a typical Corel dataset show not only the validity of the refinement, but also the superiority of the proposed algorithm over existing ones.",
"title": ""
},
{
"docid": "48653a8de0dd6e881415855e694fc925",
"text": "The aim of this study was to compare the use of transcutaneous vs. motor nerve stimulation in the evaluation of low-frequency fatigue. Nine female and eleven male subjects, all physically active, performed a 30-min downhill run on a motorized treadmill. Knee extensor muscle contractile characteristics were measured before, immediately after (Post), and 30 min after the fatiguing exercise (Post30) by using single twitches and 0.5-s tetani at 20 Hz (P20) and 80 Hz (P80). The P20-to-P80 ratio was calculated. Electrical stimulations were randomly applied either maximally to the femoral nerve or via large surface electrodes (ES) at an intensity sufficient to evoke 50% of maximal voluntary contraction (MVC) during a 80-Hz tetanus. Voluntary activation level was also determined during isometric MVC by the twitch-interpolation technique. Knee extensor MVC and voluntary activation level decreased at all points in time postexercise (P < 0.001). P20 and P80 displayed significant time x gender x stimulation method interactions (P < 0.05 and P < 0.001, respectively). Both stimulation methods detected significant torque reductions at Post and Post30. Overall, ES tended to detect a greater impairment at Post in male and a lesser one in female subjects at both Post and Post30. Interestingly, the P20-P80 ratio relative decrease did not differ between the two methods of stimulation. The low-to-high frequency ratio only demonstrated a significant time effect (P < 0.001). It can be concluded that low-frequency fatigue due to eccentric exercise appears to be accurately assessable by ES.",
"title": ""
},
{
"docid": "02a276b26400fe37804298601b16bc13",
"text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.",
"title": ""
},
{
"docid": "3f9faa5f62cfca0492797c50810ce7e1",
"text": "3D-GAN (Wu et al. in: Advances in Neural Information Processing Systems, pp. 82–90, 2016) has been introduced as a novel way to generate 3D models. In this paper, we propose a 3D-Masked-CGAN approach to apply in the generation of irregular 3D mesh geometry such as rocks. While there are many ways to generate 3D objects, the generation of irregular 3D models has its own peculiarity. To make a model realistic is extremely time-consuming and in high cost. In order to control the shape of generated 3D models, we extend 3D-GAN by adding conditional information into both the generator and discriminator. It is shown that that this model can generate 3D rock models with effective control over the shapes of generated models.",
"title": ""
},
{
"docid": "1b5a800affc14f3693004d021677357d",
"text": "Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging 19-layer deep convolutional neural networks that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need of sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.",
"title": ""
}
] |
scidocsrr
|
40a2e8b8e002341a446e3c46eb9b21d8
|
Modelling OWL Ontologies with Graffoo
|
[
{
"docid": "6549a00df9fadd56b611ee9210102fe8",
"text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.",
"title": ""
},
{
"docid": "9d330ac4c902c80b19b5f578e3bd9125",
"text": "Since its introduction in 1986, the 10-item System Usability Scale (SUS) has been assumed to be unidimensional. Factor analysis of two independent SUS data sets reveals that the SUS actually has two factors – Usability (8 items) and Learnability (2 items). These new scales have reasonable reliability (coefficient alpha of .91 and .70, respectively). They correlate highly with the overall SUS (r = .985 and .784, respectively) and correlate significantly with one another (r = .664), but at a low enough level to use as separate scales. A sensitivity analysis using data from 19 tests had a significant Test by Scale interaction, providing additional evidence of the differential utility of the new scales. Practitioners can continue to use the current SUS as is, but, at no extra cost, can also take advantage of these new scales to extract additional information from their SUS data.",
"title": ""
}
] |
[
{
"docid": "a4e1a0f5e56685a294a2c9088809a4fb",
"text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.",
"title": ""
},
{
"docid": "4ab971e837286b95ebbdd1f99c6749c0",
"text": "In this paper we demonstrate results of a technique for synchronizing clocks and estimating ranges between a pair of RF transceivers. The technique uses a periodic exchange of ranging waveforms between two transceivers along with sophisticated delay estimation and tracking. The technique was implemented on wireless testbed transceivers with independent clocks and tested over-the-air in stationary and moving configurations. The technique achieved ~10ps synchronization accuracy and 2.1mm range deviation, using A two-channel oscilloscope and tape measure as truth sources. The timing resolution attained is three orders of magnitude better than the inverse signal bandwidth of the ranging waveform (50MHz⇒ 6m resolution), and is within a small fraction of the carrier wavelength (915MHz⇒ 327mm wavelength). We discuss how this result is consistent with the Weiss-Weinstein bound and cite new applications enabled by this technique.",
"title": ""
},
{
"docid": "aacaadc8175f1c42338d0e72c0234686",
"text": "For successful physical human-robot interaction, the capability of a robot to understand its environment is imperative. More importantly, the robot should extract from the human operator as much information as possible. A reliable 3D skeleton extraction is essential for a robot to predict the intentions of the operator while s/he moves toward the robot or performs a meaningful gesture. For this purpose, we have integrated a time-of-flight depth camera with a state-of-the-art 2D skeleton extraction library namely Openpose, to obtain 3D skeletal joint coordinates reliably. We have also developed a robust and rotation invariant (in the coronal plane)hand gesture detector using a convolutional neural network. At run time (after having been trained)the detector does not require any pre-processing of the hand images. A complete pipeline for skeleton extraction and hand gesture recognition is developed and employed for real-time physical human-robot interaction, demonstrating the promising capability of the designed framework. This work establishes a firm basis and will be extended for the development of intelligent human intention detection in physical human-robot interaction scenarios, to efficiently recognize a variety of static as well as dynamic gestures.",
"title": ""
},
{
"docid": "1feaf48291b7ea83d173b70c23a3b7c0",
"text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).",
"title": ""
},
{
"docid": "5f21a1348ad836ded2fd3d3264455139",
"text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.",
"title": ""
},
{
"docid": "ff947ccb7efdd5517f9b60f9c11ade6a",
"text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.",
"title": ""
},
{
"docid": "5320ff5b9e2a3d0d206bb74ed0e047cd",
"text": "To the Editor: How do Shai et al. (July 17 issue)1 explain why the subjects in their study regained weight between month 6 and month 24, despite a reported reduction of 300 to 600 calories per day? Contributing possibilities may include the notion that a food-frequency questionnaire cannot precisely determine energy or macronutrient intake but, rather, ascertains general dietary patterns. Certain populations may underreport intake2,3 and have a decreased metabolic rate. The authors did not measure body composition, which is critical for documenting weight-loss components. In addition, the titles of the diets that are described in the article are misleading. Labeling the “low-carbohydrate” diet as such is questionable, since 40 to 42% of calories were from carbohydrates from month 6 to month 24, and data regarding ketosis support this view. Participants in the low-fat and Mediterranean-diet groups consumed between 30% and 33% of calories from fat and did not increase fiber consumption, highlighting the importance of diet quality. Furthermore, the authors should have provided baseline values and P values for within-group changes from baseline (see Table 2 of the article). Contrary to the authors’ assertion, it is not surprising that the effects on many biomarkers were minimal, since the dietary changes were minimal. The absence of biologically significant weight loss (2 to 4% after 2 years) highlights the fact that energy restriction and weight loss in themselves may minimally affect metabolic outcomes and that lifestyle changes must incorporate physical activity to optimize the reduction in the risk of chronic disease.4,5 Christian K. Roberts, Ph.D. R. James Barnard, Ph.D. Daniel M. Croymans, B.S.",
"title": ""
},
{
"docid": "2653554c6dec7e9cfa0f5a4080d251e2",
"text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.",
"title": ""
},
{
"docid": "6bf002e1a3f544ebf599940ef22c1911",
"text": "In this paper, we present a new approach for fingerprint class ification based on Discrete Fourier Transform (DFT) and nonlinear discrimina nt nalysis. Utilizing the Discrete Fourier Transform and directional filters, a relia ble and efficient directional image is constructed from each fingerprint image, and then no nlinear discriminant analysis is applied to the constructed directional images, reducing the dimension dramatically and extracting the discriminant features. The pr oposed method explores the capability of DFT and directional filtering in dealing with l ow quality images and the effectiveness of nonlinear feature extraction method in fin gerprint classification. Experimental results demonstrates competitive performance compared with other published results.",
"title": ""
},
{
"docid": "044756096a67edd1681d00afbdd7d40e",
"text": "We report in this paper two types of broadband transitions between microstrip and coplanar lines on thin benzocyclobutene (BCB) polymer substrate. They are both via-free, using electromagnetic coupling between the bottom and top ground planes, which simplifies the manufacturing of components driven by microstrip electrodes. In the first ones, the bottom ground is not patterned, which makes them particularly suitable to on-wafer measurement of components under development with coplanar probes. An ultra-broad bandwidth of 68 GHz (from 1 GHz to 69 GHz) was achieved with 20-pm BCB. In the second ones, intended for connectorizing components on thin substrate with coplanar connectors, the bottom ground is patterned to match the narrow center conductor (54 μm) on thin substrate to the wide center conductor (127 μm) of the connector with a tapered section, achieving to a experimental bandwidth 13 GHz for the moment.",
"title": ""
},
{
"docid": "539294c5fbe3fa7e96524f5260dbb7a1",
"text": "Demonstrations of mm-Wave arrays with >50 elements in silicon has led to an interest in large-scale mm-Wave MIMO arrays for 5G networks, which promise substantial improvements in network capacity [1,2]. Practical considerations result in such arrays being developed with a tiled approach, where N unit cells with M elements each are tiled to achieve large MIMO/phased arrays with NM elements [2]. Achieving stringent phase-noise specifications and scalable LO distribution to maintain phase coherence across different unit cell ICs/PCBs are a critical challenge. In this paper, we demonstrate a scalable, single-wire-synchronization architecture and circuits for mm-Wave arrays that preserve the simplicity of daisy-chained LO distribution, compensate for phase offset due to interconnects, and provide phase-noise improvement with increasing number of PLLs [3]. Measurements on a scalable 28GHz prototype demonstrate a 21% improvement in rms jitter and a 3.4dB improvement in phase noise at 10MHz offset when coupling 28GHz PLLs across three different ICs.",
"title": ""
},
{
"docid": "304b4cee4006e87fc4172a3e9de88ed1",
"text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.",
"title": ""
},
{
"docid": "c29efdd4ef9607a92c4239c08710b089",
"text": "Network coding over time varying channels has been investigated and a new scheme is proposed. We propose a novel model for packet transmission over time variant channels that exploits the channel delay profile and the dependency between channel states via first order auto-regression for Ka-band satellite communications. We provide an approximation of the delay induced assuming finite number of time slots to transmit a given number of packets. We also propose a novel adaptive transmission scheme that compensates for the lost degrees of freedom by tracking the packet erasures over time. Our results show that network coding non-adaptive mechanism for time variant channels has around 2 times throughput and delay performance gains for small size packets over network coding mechanisms with fixed channel erasures and similar performance gains for large size packets. In addition, it is shown that network coding non-adaptive mechanism for time variant channels has similar performance to the Selective Repeat (SR) with ARQ, and better performance when packet error probability is high, while due to better utilization of channel resources SR performance is similar or moderately better at very low erasures, i.e., at high SNR. However, our adaptive transmission scheme outperforms the network coding non-adaptive mechanism and SR with more than 7 times in throughput and delay performance gains.",
"title": ""
},
{
"docid": "e4ca7c16acd9b71a5ae7f1ee29101782",
"text": "Recently, distributed generators and sensitive loads have been widely used. They enable a solid-state circuit breaker (SSCB), which is an imperative device to get acceptable power quality of ac power grid systems. The existing ac SSCB composed of a silicon-controlled rectifier requires some auxiliary mechanical devices to achieve the reclosing operation before fault recovery. However, the new ac SSCB can achieve a quick breaking operation and then be reclosed with no auxiliary mechanical devices or complex control even under sustained short-circuit fault because the commutation capacitors are charged naturally without any complex control of main thyristors and auxiliary ones. The performance features of the proposed ac SSCB are verified through the experiment results of the short-circuit faults.",
"title": ""
},
{
"docid": "df4b4119653789266134cf0b7571e332",
"text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods. In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.",
"title": ""
},
{
"docid": "da64b7855ec158e97d48b31e36f100a5",
"text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.",
"title": ""
},
{
"docid": "a0d3ebfb9a3f3c27ee2d23a74dba1f50",
"text": "Machine Learning (ML) has been successful in automating a range of cognitive tasks that humans solve effortlessly and quickly. Yet many realworld tasks are difficult and slow : people solve them by an extended process that involves analytical reasoning, gathering external information, and discussing with collaborators. Examples include medical advice, judging a criminal trial, and providing personalized recommendations for rich content such as books or academic papers. There is great demand for automating tasks that require deliberative judgment. Current ML approaches can be unreliable: this is partly because such tasks are intrinsically difficult (even AI-complete) and partly because assembling datasets of deliberative judgments is expensive (each label might take hours of human work). We consider addressing this data problem by collecting fast judgments and using them to help predict deliberative (slow) judgments. Instead of having a human spend hours on a task, we might instead collect their judgment after 30 seconds or 10 minutes. These fast judgments are combined with a smaller quantity of slow judgments to provide training data. The resulting prediction problem is related to semi-supervised learning and collaborative filtering. We designed two tasks for the purpose of testing ML algorithms on predicting human deliberative judgments. One task involves Fermi estimation (back-of-the-envelope estimation) and the other involves judging the veracity of political statements. We collected a dataset of 25,000 judgments from more than 800 people. We define an ML prediction task for predicting deliberative judgments given a training set that also contains fast judgments. We tested a variety of baseline algorithms on this task. Unfortunately our dataset has serious limitations. Additional work is required to create a good testbed for predicting human deliberative judgments. This technical report explains the motivation for our project (which might be built on in future work) and explains how further work can avoid our mistakes. Our dataset and code is available at https: //github.com/oughtinc/psj. ∗University of Oxford †Ought Inc.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "81d4f23c5b6d407e306569f4e3ad4be9",
"text": "While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space - sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.",
"title": ""
},
{
"docid": "50c639dfa7063d77cda26666eabeb969",
"text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments",
"title": ""
}
] |
scidocsrr
|
66d83be656b37c668d9d6753c6ac8bff
|
Cloud-based Wireless Network: Virtualized, Reconfigurable, Smart Wireless Network to Enable 5G Technologies
|
[
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
},
{
"docid": "4412bca4e9165545e4179d261828c85c",
"text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.",
"title": ""
},
{
"docid": "68a0298286210e50240557222468c4d3",
"text": "As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose.",
"title": ""
}
] |
[
{
"docid": "cf609c174c70295ef57995f662ceda50",
"text": "Upper limb exercise is often neglected during post-stroke rehabilitation. Video games have been shown to be useful in providing environments in which patients can practise repetitive, functionally meaningful movements, and in inducing neuroplasticity. The design of video games is often focused upon a number of fundamental principles, such as reward, goals, challenge and the concept of meaningful play, and these same principles are important in the design of games for rehabilitation. Further to this, there have been several attempts for the strengthening of the relationship between commercial game design and rehabilitative game design, the former providing insight into factors that can increase motivation and engagement with the latter. In this article, we present an overview of various game design principles and the theoretical grounding behind their presence, in addition to attempts made to utilise these principles in the creation of upper limb stroke rehabilitation systems and the outcomes of their use. We also present research aiming to move the collaborative efforts of designers and therapists towards a model for the structured design of these games and the various steps taken concerning the theoretical classification and mapping of game design concepts with intended cognitive and motor outcomes.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "6e75a3e63c515f97b6ab9c68d1f77d2c",
"text": "This paper explores the use of multiple models in performing question answering tasks on the Stanford Question Answering Database. We first implement and share results of a baseline model using bidirectional long short-term memory (BiLSTM) encoding of question and context followed a simple co-attention model [1]. We then report on the use of match-LSTM and Pointer Net which showed marked improvements in question answering over the baseline model [2]. Lastly, we extend the model by adding Dropout [3] and randomization strategies to account for unknown tokens. Final test score on Codalab under Username: yife, F1: 66.981, EM: 56.845.",
"title": ""
},
{
"docid": "f6529f9327f72d77d36e2002d97cfdf6",
"text": "The history of machine translation is described from its beginnings in the 1940s to the present day. In the earliest years, efforts were concentrated either on developing immediately useful systems, however crude in their translation quality, or on fundamental research for high quality translation systems. After the ALPAC report in 1966, which virtually ended MT research in the US for more than a decade, research focussed on the development of systems requiring human assistance for producing translations of technical documentation, on translation tools for direct use by translators themselves, and, in recent years, on systems for translating email, Web pages and other Internet documentation, where poor quality is acceptable in the interest of rapid results.",
"title": ""
},
{
"docid": "db4499bc08d0ed24f81e9412d8869d37",
"text": "In this paper we assess our progress toward creating a virtual human negotiation agent with fluid turn-taking skills. To facilitate the design of this agent, we have collected a corpus of human-human negotiation roleplays as well as a corpus of Wizard-controlled human-agent negotiations in the same roleplay scenario. We compare the natural turn-taking behavior in our human-human corpus with that achieved in our Wizard-of-Oz corpus, and quantify our virtual human’s turn-taking skills using a combination of subjective and objective metrics. We also discuss our design for a Wizard user interface to support real-time control of the virtual human’s turntaking and dialogue behavior, and analyze our wizard’s usage of this interface.",
"title": ""
},
{
"docid": "6d809270c7fbcf5b4b3c1a3c71026c3f",
"text": "Requirements defects have a major impact throughout the whole software lifecycle. Having a specific defects classification for requirements is important to analyse the root causes of problems, build checklists that support requirements reviews and to reduce risks associated with requirements problems. In our research we analyse several defects classifiers; select the ones applicable to requirements specifications, following rules to build defects taxonomies; and assess the classification validity in an experiment of requirements defects classification performed by graduate and undergraduate students. Not all subjects used the same type of defect to classify the same defect, which suggests that defects classification is not consensual. Considering our results we give recommendations to industry and other researchers on the design of classification schemes and treatment of classification results.",
"title": ""
},
{
"docid": "c3b2ef6d7010d7c08c314ddfdc2780c4",
"text": "research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn’t work as advertised? What isn’t likely to work? What isn’t affordable? This article holds a mirror up to the community, both to provide feedback and stimulate more selfassessment. The significant accomplishments and strengths of the field are highlighted. The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field’s ability to produce reusable and interoperable components.",
"title": ""
},
{
"docid": "8f967b0a46e3dad8f39476b2efea48b7",
"text": "Today’s rapid changing world highlights the influence and impact of technology in all aspects of learning life. Higher Education institutions in developed Western countries believe that these developments offer rich opportunities to embed technological innovations within the learning environment. This places developing countries, striving to be equally competitive in international markets, under tremendous pressure to similarly embed appropriate blends of technologies within their learning and curriculum approaches, and consequently enhance and innovate their learning experiences. Although many universities across the world have incorporated internet-based learning systems, the success of their implementation requires an extensive understanding of the end user acceptance process. Learning using technology has become a popular approach within higher education institutions due to the continuous growth of Internet innovations and technologies. Therefore, this paper focuses on the investigation of students, who attempt to successfully adopt e-learning systems at universities in Jordan. The conceptual research framework of e-learning adoption, which is used in the analysis, is based on the technology acceptance model. The study also provides an indicator of students’ acceptance of e-learning as well as identifying the important factors that would contribute to its successful use. The outcomes will enrich the understanding of students’ acceptance of e-learning and will assist in its continuing implementation at Jordanian Universities.",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
{
"docid": "4d0889329f9011adc05484382e4f5dc0",
"text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.",
"title": ""
},
{
"docid": "23527243a9ccb9feaa24ccc7ac38f05d",
"text": "BACKGROUND\nElectrosurgical units are the most common type of electrical equipment in the operating room. A basic understanding of electricity is needed to safely apply electrosurgical technology for patient care.\n\n\nMETHODS\nWe reviewed the literature concerning the essential biophysics, the incidence of electrosurgical injuries, and the possible mechanisms for injury. Various safety guidelines pertaining to avoidance of injuries were also reviewed.\n\n\nRESULTS\nElectrothermal injury may result from direct application, insulation failure, direct coupling, capacitive coupling, and so forth.\n\n\nCONCLUSION\nA thorough knowledge of the fundamentals of electrosurgery by the entire team in the operating room is essential for patient safety and for recognizing potential complications. Newer hemostatic technologies can be used to decrease the incidence of complications.",
"title": ""
},
{
"docid": "fba3c3a0fbc08c992d388e6854890b01",
"text": "This paper presents a revenue maximisation model for sales channel allocation based on dynamic programming. It helps the media seller to determine how to distribute the sales volume of page views between guaranteed and nonguaranteed channels for display advertising. The model can algorithmically allocate and price the future page views via standardised guaranteed contracts in addition to real-time bidding (RTB). This is one of a few studies that investigates programmatic guarantee (PG) with posted prices. Several assumptions are made for media buyers’ behaviour, such as risk-aversion, stochastic demand arrivals, and time and price effects. We examine our model with an RTB dataset and find it increases the seller’s expected total revenue by adopting different pricing and allocation strategies depending the level of competition in RTB campaigns. The insights from this research can increase the allocative efficiency of the current media sellers’ sales mechanism and thus improve their revenue.",
"title": ""
},
{
"docid": "a084e7dd5485e01d97ccf628bc00d644",
"text": "A novel concept called gesture-changeable under-actuated (GCUA) function is proposed to improve the dexterities of traditional under-actuated hands and reduce the control difficulties of dexterous hands. Based on the GCUA function, a new humanoid robot hand, GCUA Hand is designed and manufactured. The GCUA Hand can grasp different objects self-adaptively and change its initial gesture dexterously before contacting objects. The hand has 5 fingers and 15 DOFs, each finger is based on screw-nut transmission, flexible drawstring constraint and belt-pulley under-actuated mechanism to realize GCUA function. The analyses on grasping static forces and grasping stabilities are put forward. The analyses and Experimental results show that the GCUA function is very nice and valid. The hands with the GCUA function can meet the requirements of grasping and operating with lower control and cost, which is the middle road between traditional under-actuated hands and dexterous hands.",
"title": ""
},
{
"docid": "33c113db245fb36c3ce8304be9909be6",
"text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.",
"title": ""
},
{
"docid": "d45b084040e5f07d39f622fc3543e10b",
"text": "Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3× faster. The code is publicly available at: https://github.com/lzzcd001/OSLSM.",
"title": ""
},
{
"docid": "db866d876dddb61c4da3ff554e5b6643",
"text": "Distributed stream processing systems need to support stateful processing, recover quickly from failures to resume such processing, and reprocess an entire data stream quickly. We present Apache Samza, a distributed system for stateful and fault-tolerant stream processing. Samza utilizes a partitioned local state along with a low-overhead background changelog mechanism, allowing it to scale to massive state sizes (hundreds of TB) per application. Recovery from failures is sped up by re-scheduling based on Host Affinity. In addition to processing infinite streams of events, Samza supports processing a finite dataset as a stream, from either a streaming source (e.g., Kafka), a database snapshot (e.g., Databus), or a file system (e.g. HDFS), without having to change the application code (unlike the popular Lambdabased architectures which necessitate maintenance of separate code bases for batch and stream path processing). Samza is currently in use at LinkedIn by hundreds of production applications with more than 10, 000 containers. Samza is an open-source Apache project adopted by many top-tier companies (e.g., LinkedIn, Uber, Netflix, TripAdvisor, etc.). Our experiments show that Samza: a) handles state efficiently, improving latency and throughput by more than 100× compared to using a remote storage; b) provides recovery time independent of state size; c) scales performance linearly with number of containers; and d) supports reprocessing of the data stream quickly and with minimal interference on real-time traffic.",
"title": ""
},
{
"docid": "c4367db5e4f46a7c58e11c2fbb629f90",
"text": "Microblogging data is growing at a rapid pace. This poses new challenges to the data management systems, such as graph databases, that are typically suitable for analyzing such data. In this paper, we share our experience on executing a wide variety of micro-blogging queries on two popular graph databases: Neo4j and Sparksee. Our queries are designed to be relevant to popular applications of micro-blogging data. The queries are executed on a large real graph data set comprising of nearly 50 million nodes and 326 million edges.",
"title": ""
},
{
"docid": "d5b51e2d90b52fed0712db7dad6602c9",
"text": "Due to the rapid increase in world population, the waste of food and resources, and non-sustainable food production practices, the use of alternative food sources is currently strongly promoted. In this perspective, insects may represent a valuable alternative to main animal food sources due to their nutritional value and sustainable production. However, edible insects may be perceived as an unappealing food source and are indeed rarely consumed in developed countries. The food safety of edible insects can thus contribute to the process of acceptance of insects as an alternative food source, changing the perception of developed countries regarding entomophagy. In the present study, the levels of organic contaminants (i.e. flame retardants, PCBs, DDT, dioxin compounds, pesticides) and metals (As, Cd, Co, Cr, Cu, Ni, Pb, Sn, Zn) were investigated in composite samples of several species of edible insects (greater wax moth, migratory locust, mealworm beetle, buffalo worm) and four insect-based food items currently commercialized in Belgium. The organic chemical mass fractions were relatively low (PCBs: 27-2065 pg/g ww; OCPs: 46-368 pg/g ww; BFRs: up to 36 pg/g ww; PFRs 783-23800 pg/g ww; dioxin compounds: up to 0.25 pg WHO-TEQ/g ww) and were generally lower than those measured in common animal products. The untargeted screening analysis revealed the presence of vinyltoluene, tributylphosphate (present in 75% of the samples), and pirimiphos-methyl (identified in 50% of the samples). The levels of Cu and Zn in insects were similar to those measured in meat and fish in other studies, whereas As, Co, Cr, Pb, Sn levels were relatively low in all samples (<0.03 mg/kg ww). Our results support the possibility to consume these insect species with no additional hazards in comparison to the more commonly consumed animal products.",
"title": ""
},
{
"docid": "b2db53f203f2b168ec99bd8e544ff533",
"text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.",
"title": ""
},
{
"docid": "e88f19cdd7f21c5aafedc13143bae00f",
"text": "For a long time, the term virtualization implied talking about hypervisor-based virtualization. However, in the past few years container-based virtualization got mature and especially Docker gained a lot of attention. Hypervisor-based virtualization provides strong isolation of a complete operating system whereas container-based virtualization strives to isolate processes from other processes at little resource costs. In this paper, hypervisor and container-based virtualization are differentiated and the mechanisms behind Docker and LXC are described. The way from a simple chroot over a container framework to a ready to use container management solution is shown and a look on the security of containers in general is taken. This paper gives an overview of the two different virtualization approaches and their advantages and disadvantages.",
"title": ""
}
] |
scidocsrr
|
19cb7e29919bb9336b151b313d42c4ef
|
Approximate fair bandwidth allocation: A method for simple and flexible traffic management
|
[
{
"docid": "740daa67e29636ac58d6f3fa48bd51ba",
"text": "Status of Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.",
"title": ""
}
] |
[
{
"docid": "c629d3588203af2e328fb116c836bb8c",
"text": "The purpose of this study was to clinically and radiologically compare the utility, osteoconductivity, and absorbability of hydroxyapatite (HAp) and beta-tricalcium phosphate (TCP) spacers in medial open-wedge high tibial osteotomy (HTO). Thirty-eight patients underwent medial open-wedge HTO with a locking plate. In the first 19 knees, a HAp spacer was implanted in the opening space (HAp group). In the remaining 19 knees, a TCP spacer was implanted in the same manner (TCP group). All patients underwent clinical and radiological examinations before surgery and at 18 months after surgery. Concerning the background factors, there were no statistical differences between the two groups. Post-operatively, the knee score significantly improved in each group. Concerning the post-operative knee alignment and clinical outcome, there was no statistical difference in each parameter between the two groups. Regarding the osteoconductivity, the modified van Hemert’s score of the TCP group was significantly higher (p = 0.0009) than that of the HAp group in the most medial osteotomy zone. The absorption rate was significantly greater in the TCP group than in the HAp group (p = 0.00039). The present study demonstrated that a TCP spacer was significantly superior to a HAp spacer concerning osteoconductivity and absorbability at 18 months after medial open-wedge HTO. Retrospective comparative study, Level III.",
"title": ""
},
{
"docid": "85221954ced857c449acab8ee5cf801e",
"text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.",
"title": ""
},
{
"docid": "a34e153e5027a1483fd25c3ff3e1ea0c",
"text": "In this paper, we study how to initialize the convolutional neural network (CNN) model for training on a small dataset. Specially, we try to extract discriminative filters from the pre-trained model for a target task. On the basis of relative entropy and linear reconstruction, two methods, Minimum Entropy Loss (MEL) and Minimum Reconstruction Error (MRE), are proposed. The CNN models initialized by the proposed MEL and MRE methods are able to converge fast and achieve better accuracy. We evaluate MEL and MRE on the CIFAR10, CIFAR100, SVHN, and STL-10 public datasets. The consistent performances demonstrate the advantages of the proposed methods.",
"title": ""
},
{
"docid": "a645943a02f5d71b146afe705fb6f49f",
"text": "Along with the developments in the field of information technologies, the data in the electronic environment is increasing. Data mining methods are needed to obtain useful information for users in electronic environment. One of these methods, clustering methods, aims to group data according to common properties. This grouping is often based on the distance between the data. Clustering methods are divided into hierarchical and non-hierarchical methods according to the fragmentation technique of clusters. The success of both types of clustering methods varies according to the data set applied. In this study, both types of methods were tested on different type of data sets. Selected methods compared according to five different evaluation metrics. The results of the analysis are presented comparatively at the end of the study and which methods are more convenient for data set is explained.",
"title": ""
},
{
"docid": "5168f7f952d937460d250c44b43f43c0",
"text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "ae7117416b4a07d2b15668c2c8ac46e3",
"text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.",
"title": ""
},
{
"docid": "0289858bb9002e00d753e1ed2da8b204",
"text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three space. Results with implementation on a mobile manipulator are discussed.",
"title": ""
},
{
"docid": "19621b0ab08cb0abed04b859331d8092",
"text": "The objective of designing a strategy for an institution is to create more value and achieve its vision, with clear and coherent strategies, identifying the conditions in which they are currently, the sector in which they work and the different types of competences that generate, as well as the market in general where they perform, to create this type of conditions requires the availability of strategic information to verify the current conditions, to define the strategic line to follow according to internal and external factors, and in this way decide which methods to use to implement the development of a strategy in the organization. This research project was developed in an institution of higher education where the strategic processes were analyzed from different perspectives i.e. financial, customers, internal processes, and training and learning using business intelligence tools, such as Excel Power BI, Power Pivot, Power Query and a relational database for data repository; which helped having agile and effective information for the creation of the balanced scorecard, involving all levels of the organization and academic units; operating key performance indicators (KPI’s), for operational and strategic decisions. The results were obtained in form of boards of indicators designed to be visualized in the final view of the software constructed with previously described software tools. Keywords—Business intelligence; balanced scorecard; key performance indicators; BI Tools",
"title": ""
},
{
"docid": "0e5111addf4a6d5f0cad92707d6b7173",
"text": "We present a novel model based stereo system, which accurately extracts the 3D shape and pose of faces from multiple images taken simultaneously. Extracting the 3D shape from images is important in areas such as pose-invariant face recognition and image manipulation. The method is based on a 3D morphable face model learned from a database of facial scans. The use of a strong face prior allows us to extract high precision surfaces from stereo data of faces, where traditional correlation based stereo methods fail because of the mostly textureless input images. The method uses two or more uncalibrated images of arbitrary baseline, estimating calibration and shape simultaneously. Results using two and three input images are presented. We replace the lighting and albedo estimation of a monocular method with the use of stereo information, making the system more accurate and robust. We evaluate the method using ground truth data and the standard PIE image dataset. A comparison with the state of the art monocular system shows that the new method has a significantly higher accuracy.",
"title": ""
},
{
"docid": "85012f6ad9aa8f3e80a9c971076b4eb9",
"text": "The article aims to introduce an integrated web-based interactive data platform for molecular dynamic simulations using the datasets generated by different life science communities from Armenia. The suggested platform, consisting of data repository and workflow management services, is vital for current and future scientific discoveries in the life science domain. We focus on interactive data visualization workflow service as a key to perform more in-depth analyzes of research data outputs, helping to understand the problems efficiently and to consolidate the data into one collective illustration platform. The functionalities of the integrated data platform is presented as an advanced integrated environment to capture, analyze, process and visualize the scientific data.",
"title": ""
},
{
"docid": "8d208bb5318dcbc5d941df24906e121f",
"text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.",
"title": ""
},
{
"docid": "ae9469b80390e5e2e8062222423fc2cd",
"text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.",
"title": ""
},
{
"docid": "ec26505d813ed98ac3f840ea54358873",
"text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.",
"title": ""
},
{
"docid": "73e616ebf26c6af34edb0d60a0ce1773",
"text": "While recent deep neural networks have achieved a promising performance on object recognition, they rely implicitly on the visual contents of the whole image. In this paper, we train deep neural networks on the foreground (object) and background (context) regions of images respectively. Considering human recognition in the same situations, networks trained on the pure background without objects achieves highly reasonable recognition performance that beats humans by a large margin if only given context. However, humans still outperform networks with pure object available, which indicates networks and human beings have different mechanisms in understanding an image. Furthermore, we straightforwardly combine multiple trained networks to explore different visual cues learned by different networks. Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework.",
"title": ""
},
{
"docid": "0952701dd63326f8a78eb5bc9a62223f",
"text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.",
"title": ""
},
{
"docid": "83fda0277ebcdb6aeae216a38553db9c",
"text": "Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it di cult for non-experts to use. We propose an automatic variational inference algorithm, automatic di erentiation variational inference ( ); we implement it in Stan (code available), a probabilistic programming system. In the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare to sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With we can use variational inference on any model we write in Stan.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "e67d09b3bf155c5191ad241006e011ad",
"text": "An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.",
"title": ""
}
] |
scidocsrr
|
1fed19e9ce9c5752f552fd164ee8ec78
|
Contextualized Bilinear Attention Networks
|
[
{
"docid": "86c998f5ffcddb0b74360ff27b8fead4",
"text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.",
"title": ""
}
] |
[
{
"docid": "527c4c17aadb23a991d85511004a7c4f",
"text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"title": ""
},
{
"docid": "c6054c39b9b36b5d446ff8da3716ec30",
"text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "273bd38144d33aa215298ddd5cf674f2",
"text": "Looking to increase the functionality of current wireless platforms and to improve their quality of service, we have explored the merits of using frequency-reconfigurable antennas as an alternative for multiband antennas. Our study included an analysis of various reconfigurable and multiband structures such as patches, wires, and combinations. Switches, such as radio-frequency microelectromechanical systems (RFMEMS) and p-i-n diodes, were also studied and directly incorporated onto antenna structures to successfully form frequency-reconfigurable antennas.",
"title": ""
},
{
"docid": "a1bef11b10bc94f84914d103311a5941",
"text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face with both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. Experimental results on five remote sensing data show that the combined approach is a promising method. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5a13fa656b34d25fb53c707291721d04",
"text": "Cloud computing is a popular model for accessing the computer resources. The data owner outsources their data on cloud server that can be accessed by an authorized user. In Cloud Computing public key encryption with equality test (PKEET) provides an alternative to public key encryption by simplify the public key and credential administration at Public Key Infrastructure (PKI). However it still faces the security risk in outsourced computation on encrypted data. Therefore this paper proposed a novel identity based hybrid encryption (RSA with ECC) to enhance the security of outsourced data. In this approach sender encrypts the sensitive data using hybrid algorithm. Then the proxy re encryption is used to encrypt the keyword and identity in standardize toward enrichment security of data.",
"title": ""
},
{
"docid": "94c5f0bba64e131a64989813652846a5",
"text": "The ability to access patents and relevant patent-related information pertaining to a patented technology can fundamentally transform the patent system and its functioning and patent institutions such as the USPTO and the federal courts. This paper describes an ontology-based computational framework that can resolve some of difficult issues in retrieving patents and patent related information for the legal and justice system.",
"title": ""
},
{
"docid": "6b718717d5ecef343a8f8033803a55e6",
"text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.",
"title": ""
},
{
"docid": "ec920015a3206a5d76e8ab3698ceab90",
"text": "In this paper, we present a method for temporal relation extraction from clinical narratives in French and in English. We experiment on two comparable corpora, the MERLOT corpus for French and the THYME corpus for English, and show that a common approach can be used for both languages.",
"title": ""
},
{
"docid": "da47eb6c793f4afff5aecf6f52194e12",
"text": "An inline chalcogenide phase change RF switch utilizing germanium telluride (GeTe) and driven by an integrated, electrically isolated thin film heater for thermal actuation has been fabricated. A voltage or current pulse applied to the heater terminals was used to transition the phase change material between the crystalline and amorphous states. An on-state resistance of 1.2 Ω (0.036 Ω-mm), with an off-state capacitance and resistance of 18.1 fF and 112 kΩ respectively were measured. This results in an RF switch cut-off frequency (Fco) of 7.3 THz, and an off/on DC resistance ratio of 9 × 104. The heater pulse power required to switch the GeTe between the two states was as low as 0.5W, with zero power consumption during steady state operation, making it a non-volatile RF switch. To the authors' knowledge, this is the first reported implementation of an RF phase change switch in a 4-terminal, inline configuration.",
"title": ""
},
{
"docid": "bfdfd911e913c4dbe7a01e775ae6f5bf",
"text": "With the upgrowing of digital processing of images and film archiving, the need for assisted or unsupervised restoration required the development of a series of methods and techniques. Among them, image inpainting is maybe the most impressive and useful. Based on partial derivative equations or texture synthesis, many other hybrid techniques have been proposed recently. The need for an analytical comparison, beside the visual one, urged us to perform the studies shown in the present paper. Starting with an overview of the domain, an evaluation of the five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms have been presented, categorizing them in function of the restored image structure. Based on these experiments, we have proposed an adaptation of Oliveira's and Hadhoud's algorithms, which are performing well on images with natural defects.",
"title": ""
},
{
"docid": "aa30fc0f921509b1f978aeda1140ffc0",
"text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.",
"title": ""
},
{
"docid": "c29349c32074392e83f51b1cd214ec8a",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
},
{
"docid": "7a6d32d50e3b1be70889fc85ffdcac45",
"text": "Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.",
"title": ""
},
{
"docid": "96f4f77f114fec7eca22d0721c5efcbe",
"text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.",
"title": ""
},
{
"docid": "49e76ffb51f11339950005ddeef71f3e",
"text": "Multichannel die probing increases test speed and lowers the overall cost of testing. A new high-density wafer probe card based on MEMS technology is presented in this paper. MEMS-based microtest-channels have been designed to establish high-speed low-resistance connectivity between the die-under-test and the tester at the wafer level. The proposed test scheme can be used to probe fine pitch pads and interconnects of a new generation of 3-D integrated circuits. The proposed MEMS probe, which is fabricated with two masks, supports \\(10^{6}\\) lifetime touchdowns. Measurement results using a prototype indicate that the proposed architecture can be used to conduct manufacturing tests up to 38.6 GHz with less than -1-dB insertion loss while maintaining 11.4-m\\(\\Omega \\) contact resistance. The measured return loss of the probe at 39.6 GHz is -12.05 dB.",
"title": ""
},
{
"docid": "2aed918913e6b72603e3dfdfca710572",
"text": "We investigate the task of building a domain aware chat system which generates intelligent responses in a conversation comprising of different domains. The domain in this case is the topic or theme of the conversation. To achieve this, we present DOM-Seq2Seq, a domain aware neural network model based on the novel technique of using domain-targeted sequence-to-sequence models (Sutskever et al., 2014) and a domain classifier. The model captures features from current utterance and domains of the previous utterances to facilitate the formation of relevant responses. We evaluate our model on automatic metrics and compare our performance with the Seq2Seq model.",
"title": ""
},
{
"docid": "7368671d20b4f4b30a231d364eb501bc",
"text": "In this article, we study the problem of Web user profiling, which is aimed at finding, extracting, and fusing the “semantic”-based user profile from the Web. Previously, Web user profiling was often undertaken by creating a list of keywords for the user, which is (sometimes even highly) insufficient for main applications. This article formalizes the profiling problem as several subtasks: profile extraction, profile integration, and user interest discovery. We propose a combination approach to deal with the profiling tasks. Specifically, we employ a classification model to identify relevant documents for a user from the Web and propose a Tree-Structured Conditional Random Fields (TCRF) to extract the profile information from the identified documents; we propose a unified probabilistic model to deal with the name ambiguity problem (several users with the same name) when integrating the profile information extracted from different sources; finally, we use a probabilistic topic model to model the extracted user profiles, and construct the user interest model. Experimental results on an online system show that the combination approach to different profiling tasks clearly outperforms several baseline methods. The extracted profiles have been applied to expert finding, an important application on the Web. Experiments show that the accuracy of expert finding can be improved (ranging from +6% to +26% in terms of MAP) by taking advantage of the profiles.",
"title": ""
},
{
"docid": "9352d3d38094cc083ab3958d42b4d69a",
"text": "We performed a clinical study to evaluate the unawareness of dyskinesias in patients affected by Parkinson's disease (PD) and Huntington's disease (HD). Thirteen PD patients with levodopa-induced dyskinesias and 9 HD patients were enrolled. Patients were asked to evaluate the presence of dyskinesias while performing specific motor tasks. The Abnormal Involuntary Movement Scale (AIMS) and Goetz dyskinesia rating scale were administered to determine the severity of dyskinesias. The Unified Parkinson's disease rating scale (UPDRS) and Unified Huntington's Disease Rating Scale (UHDRS) were used in PD and HD patients, respectively. In PD we found a significant negative relationship between unawareness score at hand pronation-supination and AIMS score for upper limbs. In HD we found a significant positive relationship between total unawareness score and disease duration. In PD the unawareness seems to be inversely related with severity of dyskinesias, while in HD it is directly related to disease duration and severity.",
"title": ""
},
{
"docid": "f13cbc36f2c51c5735185751ddc2500e",
"text": "This paper presents an overview of the road and traffic sign detection and recognition. It describes the characteristics of the road signs, the requirements and difficulties behind road signs detection and recognition, how to deal with outdoor images, and the different techniques used in the image segmentation based on the colour analysis, shape analysis. It shows also the techniques used for the recognition and classification of the road signs. Although image processing plays a central role in the road signs recognition, especially in colour analysis, but the paper points to many problems regarding the stability of the received information of colours, variations of these colours with respect to the daylight conditions, and absence of a colour model that can led to a good solution. This means that there is a lot of work to be done in the field, and a lot of improvement can be achieved. Neural networks were widely used in the detection and the recognition of the road signs. The majority of the authors used neural networks as a recognizer, and as classifier. Some other techniques such as template matching or classical classifiers were also used. New techniques should be involved to increase the robustness, and to get faster systems for real-time applications.",
"title": ""
}
] |
scidocsrr
|
df17357725db1bfaf76fc0f01dc09ed9
|
Computational challenges for sentiment analysis in life sciences
|
[
{
"docid": "42613c6a08ce7d86f81ec51255a1071d",
"text": "Happiness and other emotions spread between people in direct contact, but it is unclear whether massive online social networks also contribute to this spread. Here, we elaborate a novel method for measuring the contagion of emotional expression. With data from millions of Facebook users, we show that rainfall directly influences the emotional content of their status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall. For every one person affected directly, rainfall alters the emotional expression of about one to two other people, suggesting that online social networks may magnify the intensity of global emotional synchrony.",
"title": ""
},
{
"docid": "5ff263cf4a73c202741c46d5582a960a",
"text": "Sentiment analysis; Sentiment classification; Feature selection; Emotion detection; Transfer learning; Building resources Abstract Sentiment Analysis (SA) is an ongoing field of research in text mining field. SA is the computational treatment of opinions, sentiments and subjectivity of text. This survey paper tackles a comprehensive overview of the last update in this field. Many recently proposed algorithms’ enhancements and various SA applications are investigated and presented briefly in this survey. These articles are categorized according to their contributions in the various SA techniques. The related fields to SA (transfer learning, emotion detection, and building resources) that attracted researchers recently are discussed. The main target of this survey is to give nearly full image of SA techniques and the related fields with brief details. The main contributions of this paper include the sophisticated categorizations of a large number of recent articles and the illustration of the recent trend of research in the sentiment analysis and its related areas. 2014 Production and hosting by Elsevier B.V. on behalf of Ain Shams University.",
"title": ""
},
{
"docid": "a51803d5c0753f64f5216d2cc225d172",
"text": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.",
"title": ""
}
] |
[
{
"docid": "79caff0b1495900b5c8f913562d3e84d",
"text": "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and/or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.",
"title": ""
},
{
"docid": "49a538fc40d611fceddd589b0c9cb433",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "3265677221270162ae7eaac330f64664",
"text": "We describe LifeNet, a new common sense knowledge base that captures a first-person model of human experience in terms of a propositional representation. LifeNet represents knowledge as an undirected graphical model relating 80,000 egocentric propositions with 415,000 temporal and atemporal links between these propositions. We explain how we built LifeNet by extracting its propositions and links from the Open Mind Common Sense corpus of common sense assertions, present a method for reasoning with the resulting knowledge base, evaluate the knowledge in LifeNet and the quality of inference, and describe a knowledge acquisition system that lets people interact with LifeNet to extend it further. INTRODUCTION We are interested in building ‘common sense’ models of the structure and flow of human life. Today’s computer systems lack such models—they know almost nothing about the kinds of activities people engage in, the actions we are capable of and their likely effects, the kinds of places we spend our time and the things that are found there, the types of events we enjoy and types we loathe, and so forth. By finding ways to give computers the ability to represent and reason about ordinary life, we believe they can be made more helpful participants in the human world. An adequate common sense model should include knowledge about a wide range of objects, states, events, and situations. For example, a common sense model of human life should enable the following kinds of predictions: • When someone is thirsty, it is likely that they will soon be drinking a liquid beverage. • When someone is at an airport, it is likely they possess a plane ticket. • When someone is typing at a computer, it is possible that they are composing an e-mail. • When someone is crying, it is likely that they feel sad or are in pain. • After someone wakes up, they are likely to get out of bed. Most previous efforts to encode common sense knowledge have made use of relational representations such as frames or predicate logics. However, while such representations have proven expressive enough to describe a wide range of common sense knowledge (see Davis [1] for many examples of how types of common sense knowledge can be formulated in first-order logic, or the Cyc upper level ontology [2]), it has been challenging finding methods of default reasoning that can both make use of such powerful representations and also scale to the number of assertions that are needed to encompass a reasonably broad range of common sense knowledge. In addition, as a knowledge base grows, it is increasingly likely that individual pieces of knowledge will suffer from bugs of various kinds; it seems necessary that we find methods of common sense reasoning that are tolerant to some errors and uncertainties in the knowledge base. However, in recent years there has been much progress in finding ways to reason in uncertain domains using less expressive propositional representations, for example, with Bayesian networks and other types of graphical models. Could such methods be applied to the common sense reasoning problem? Is it possible to take an approach to common sense reasoning that begins not with an ontology of predicates and individuals, but rather with a large set of propositions linked by their conditional or joint probabilities? 
Propositional representations are less expressive than relational ones, and so it may take a great many propositional rules to express the same constraint as a single relational rule, but such costs in expressivity often come with potential gains in tractability, and in the case of common sense domains, this trade-off seems to be rather poorly understood. The potential benefits of a proposition representation go beyond just matters of efficiency. From the perspective of knowledge acquisition, interfaces for browsing and entering propositional knowledge are potentially much easier to use because they do not require that the user learn to read and write some complex syntax. From the perspective of applying common sense reasoning within applications, propositional representations have such a simple semantics that they are likely quite easy to interface to. Thus, while propositional representations may be less expressive and require a larger ontology of propositions than relational representations for the same domain, they are in many ways easier to build, understand and use. In this paper we explore such questions by describing LifeNet, a new common sense knowledge base that captures a first-person model of human experience in terms of a propositional representation. LifeNet represents knowledge as a graphical model relating 80,000 egocentric propositions with 415,000 temporal and atemporal links between these propositions, e.g. • I-put-my-foot-on-the-brake-pedal → I-stop-a-car • I-pour-detergent-into-wash → I-clean-clothes • I-put-quarter-in-washing-machine → I-clean-clothes • I-am-at-a-zoo → I-see-a-monkey • I-put-on-a-seat-belt → I-drive-a-car • I-put-a-key-in-the-ignition → I-drive-a-car We explain how we built LifeNet by extracting its propositions and links from the Open Mind Common Sense corpus of common sense assertions supplied by thousands of members of the general public, present a method for reasoning with the resulting knowledge base, evaluate the knowledge in LifeNet and the quality of inference, and describe a knowledge acquisition system that lets people interact with LifeNet to extend it further. LIFENET LifeNet is a large-scale temporal graphical model expressed in terms of ‘egocentric’ propositions, e.g. propositions of the form: • I-am-at-a-restaurant • I-am-eating-a-sandwich • It-is-3-pm • It-is-raining-outside • I-feel-frightened • I-am-drinking-coffee Each of these propositions is a statement that a person could say was true or not true of their situation, perhaps with some probability. In LifeNet these propositions are arranged into two columns representing the state at two consecutive moments in time, and these propositions are linked by joint probability tables representing both the probability that one proposition follows another, and also the probability of two propositions being true at the same time. A small sample of LifeNet is shown in Figure 1 below:",
"title": ""
},
{
"docid": "eec60b309731ef2f0adbfe94324a2ca0",
"text": "Wireless sensor networks are those networks which are composed by the collection of very small devices mainly named as nodes. These nodes are integrated with small battery life which is very hard or impossible to replace or reinstate. For the sensing, gathering and processing capabilities, the usage of battery is must. Therefore, the battery life of Wireless Sensor Networks should be as large as possible in order to sense the information around it or in which the nodes are placed. The concept of hierarchical routing is mainly highlighted in this paper, in which the nodes work in a hierarchical manner by the formation of Cluster Head within a Cluster. These formed Cluster Heads then transfer the data or information in the form of packets from one cluster to another. In this work, the protocol used for the simulation is Low Energy adaptive Clustering Hierarchy which is one of the most efficient protocol. The nodes are of homogeneous in nature. The simulator used is MATLAB along with Cuckoo Search Algorithm. The Simulation results have been taken out showing the effectiveness of protocol with Cuckoo Search. Keywords— Wireless Sensor Network (WSN), Low Energy adaptive Clustering Hierarchy (LEACH), Cuckoo Search, Cluster Head (CH), Base Station (BS).",
"title": ""
},
{
"docid": "df92fe7057593a9312de91c06e1525ca",
"text": "The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial and novel and surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple formal principle explains science and art and music and humor. Note: This overview heavily draws on previous publications since 1990, especially Schmidhuber (2010b), parts of which are reprinted with friendly permission by IEEE.",
"title": ""
},
{
"docid": "ee20233660c2caa4a24dbfb512172277",
"text": "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
{
"docid": "7c00c5d75ab4beffc595aff99a66b402",
"text": "We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNN) for image classification and multigrid (MG) methods for solving discretized partial different equations (PDEs). This model is based on close connections that we have observed and uncovered between the CNN and MG methodologies. For example, pooling operation and feature extraction in CNN correspond directly to restriction operation and iterative smoothers in MG, respectively. As the solution space is often the dual of the data space in PDEs, the analogous concept of feature space and data space (which are dual to each other) is introduced in CNN. With such connections and new concept in the unified model, the function of various convolution operations and pooling used in CNN can be better understood. As a result, modified CNN models (with fewer weights and hyper parameters) are developed that exhibit competitive and sometimes better performance in comparison with existing CNN models when applied to both CIFAR-10 and CIFAR-100 data sets.",
"title": ""
},
{
"docid": "c72940e6154fa31f6bedca17336f8a94",
"text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.",
"title": ""
},
{
"docid": "e6d3b95f34640c16435b2a7a78bed25b",
"text": "In this paper, a novel face dataset with attractiveness ratings, namely the SCUT-FBP dataset, is developed for automatic facial beauty perception. This dataset provides a benchmark to evaluate the performance of different methods for facial attractiveness prediction, including the state-of-the-art deep learning method. The SCUT-FBP dataset contains face portraits of 500 Asian female subjects with attractiveness ratings, all of which have been verified in terms of rating distribution, standard deviation, consistency, and self-consistency. Benchmark evaluations for facial attractiveness prediction were performed with different combinations of facial geometrical features and texture features using classical statistical learning methods and the deep learning method. The best Pearson correlation 0.8187 was achieved by the CNN model. The results of the experiments indicate that the SCUT-FBP dataset provides a reliable benchmark for facial beauty perception.",
"title": ""
},
{
"docid": "d22390e43aa4525d810e0de7da075bbf",
"text": "information, including knowledge management and e-business applications. Next-generation knowledge management systems will likely rely on conceptual models in the form of ontologies to precisely define the meaning of various symbols. For example, FRODO (a Framework for Distributed Organizational Memories) uses ontologies for knowledge description in organizational memories,1 CoMMA (Corporate Memory Management through Agents) investigates agent technologies for maintaining ontology-based knowledge management systems,2 and Steffen Staab and his colleagues have discussed the methodologies and processes for building ontology-based systems.3 Here we present an integrated enterprise-knowledge management architecture for implementing an ontology-based knowledge management system (OKMS). We focus on two critical issues related to working with ontologies in real-world enterprise applications. First, we realize that imposing a single ontology on the enterprise is difficult if not impossible. Because organizations must devise multiple ontologies and thus require integration mechanisms, we consider means for combining distributed and heterogeneous ontologies using mappings. Additionally, a system’s ontology often must reflect changes in system requirements and focus, so we developed guidelines and an approach for managing the difficult and complex ontology-evolution process.",
"title": ""
},
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "f012c0d9fe795a738b3cd82cef94ef19",
"text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. This research concludes that creating features using domain expertise offers a notable improvement in predictive power. Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.",
"title": ""
},
{
"docid": "e4ca92179277334d9113a5580be37998",
"text": "This paper presents a systematic design approach for low-profile UWB body-of-revolution (BoR) monopole antennas with specified radiation objectives and size constraints. The proposed method combines a random walk scheme, the genetic algorithm, and a BoR moment method analysis for antenna shape optimization. A weighted global cost function, which minimizes the difference between potential optimal points and a utopia point (optimal design combining 3 different objectives) within the criterion space, is adapted. A 24'' wide and 6'' tall aperture was designed operating from low VHF frequencies up to 2 GHz. This optimized antenna shape reaches -15 dBi gain at 41 MHz on a ground plane and is only λ/12 in aperture width and λ/50 in height at this frequency. The same antenna achieves VSWR <; 3 from 210 MHz up to at least 2 GHz. Concurrently, it maintains a realized gain of ~5 dBi with moderate oscillations across the band of interest. A resistive treatment was further applied at the top antenna rim to improve matching and pattern stability. Measurements are provided for validation of the design. Of importance is that the optimized aperture delivers a larger impedance bandwidth as well as more uniform gain and pattern when compared to a previously published inverted-hat antenna of the same size.",
"title": ""
},
{
"docid": "a45be66a54403701a8271c3063dd24d8",
"text": "This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.",
"title": ""
},
{
"docid": "55989ee3d7130f150113904778720f28",
"text": "Because decisions made by human inspectors often involve subjective judgment, in addition to being intensive and therefore costly, an automated approach for printed circuit board (PCB) inspection is preferred to eliminate subjective discrimination and thus provide fast, quantitative, and dimensional assessments. In this study, defect classification is essential to the identification of defect sources. Therefore, an algorithm for PCB defect classification is presented that consists of well-known conventional operations, including image difference, image subtraction, image addition, counted image comparator, flood-fill, and labeling for the classification of six different defects, namely, missing hole, pinhole, underetch, short-circuit, open-circuit, and mousebite. The defect classification algorithm is improved by incorporating proper image registration and thresholding techniques to solve the alignment and uneven illumination problem. The improved PCB defect classification algorithm has been applied to real PCB images to successfully classify all of the defects.",
"title": ""
},
{
"docid": "f3188f260ae3fbe6f89b583aa2557e7f",
"text": "We present the design of Note Code -- a music programming puzzle game designed as a tangible device coupled with a Graphical User Interface (GUI). Tapping patterns and placing boxes in proximity enables programming these \"note-boxes\" to store sets of notes, play them back and activate different sub-components or neighboring boxes. This system provides users the opportunity to learn a variety of computational concepts, including functions, function calling and recursion, conditionals, as well as engage in composing music. The GUI adds a dimension of viewing the created programs and interacting with a set of puzzles that help discover the various computational concepts in the pursuit of creating target tunes, and optimizing the program made.",
"title": ""
},
{
"docid": "a56d43bd191147170e1df87878ca1b11",
"text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING",
"title": ""
},
{
"docid": "3132a06337d94f032c6dfdb7087633cd",
"text": "A Virtual Best Solver (VBS) is a hypothetical algorithm that selects the best solver from a given portfolio of alternatives on a per-instance basis. The VBS idealizes performance when all solvers in a portfolio are run in parallel, and also gives a valuable bound on the performance of portfolio-based algorithm selectors. Typically, VBS performance is measured by running every solver in a portfolio once on a given instance and reporting the best performance over all solvers. Here, we argue that doing so results in a flawed measure that is biased to reporting better performance when a randomized solver is present in an algorithm portfolio. Specifically, this flawed notion of VBS tends to show performance better than that achievable by a perfect selector that for each given instance runs the solver with the best expected running time. We report results from an empirical study using solvers and instances submitted to several SAT competitions, in which we observe significant bias on many random instances and some combinatorial instances. We also show that the bias increases with the number of randomized solvers and decreases as we average solver performance over many independent runs per instance. We propose an alternative VBS performance measure by (1) empirically obtaining the solver with best expected performance for each instance and (2) taking bootstrap samples for this solver on every instance, to obtain a confidence interval on VBS performance. Our findings shed new light on widely studied algorithm selection benchmarks and help explain performance gaps observed between VBS and state-of-the-art algorithm selection approaches.",
"title": ""
}
] |
scidocsrr
|
0642923b608cd6d9e2d8f3455cbc443b
|
Continuous Path Smoothing for Car-Like Robots Using B-Spline Curves
|
[
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
}
] |
[
{
"docid": "a7be4f9177e6790756b7ede4a2d9ca79",
"text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.",
"title": ""
},
{
"docid": "f4bc0b7aa15de139ddb09e406fc1ce0b",
"text": "This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple “recency” based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular “sweep rehearsal” is very successful at minimising catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, “pseudorehearsal”, a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then suggest an interpretation of these rehearsal mechanisms in the context of a function approximation based account of neural network learning. Both rehearsal and pseudorehearsal may have practical applications, allowing new information to be integrated into an existing network with minimum disruption of old information.",
"title": ""
},
{
"docid": "712636d3a1dfe2650c0568c8f7cf124c",
"text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.",
"title": ""
},
{
"docid": "42b9f909251aeb850a1bfcdf7ec3ace4",
"text": "Kidney stones are one of the most common chronic disorders in industrialized countries. In patients with kidney stones, the goal of medical therapy is to prevent the formation of new kidney stones and to reduce growth of existing stones. The evaluation of the patient with kidney stones should identify dietary, environmental, and genetic factors that contribute to stone risk. Radiologic studies are required to identify the stone burden at the time of the initial evaluation and to follow up the patient over time to monitor success of the treatment program. For patients with a single stone an abbreviated laboratory evaluation to identify systemic disorders usually is sufficient. For patients with multiple kidney stones 24-hour urine chemistries need to be measured to identify abnormalities that predispose to kidney stones, which guides dietary and pharmacologic therapy to prevent future stone events.",
"title": ""
},
{
"docid": "52315f23e419ba27e6fd058fe8b7aa9d",
"text": "Detected obstacles overlaid on the original image Polar map: The agent is at the center of the map, facing 00. The blue points correspond to polar positions of the obstacle points around the agent. 1. Talukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle SympoTalukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle Symposium, 2002. IEEE. Vol. 2. IEEE, 2002. 2. Sun, Deqing, Stefan Roth, and Michael J. Black. \"Secrets of optical flow estimation and their principles.\" Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. 3. Bernini, Nicola, et al. \"Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey.\" Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. IEEE, 2014. 4. Broggi, Alberto, et al. \"Stereo obstacle detection in challenging environments: the VIAC experience.\" Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.",
"title": ""
},
{
"docid": "e56accce9d4ae911e85f5fd2b92a614a",
"text": "This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.",
"title": ""
},
{
"docid": "ac0875c0f01d32315f4ea63049d3a1e1",
"text": "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS. ∗Equal Contribution",
"title": ""
},
{
"docid": "de1f680fd80b20f005dab2ef8067f773",
"text": "This paper describes a convolutional neural network based deep learning approach for bird song classification that was used in an audio record-based bird identification challenge, called BirdCLEF 2016. The training and test set contained about 24k and 8.5k recordings, belonging to 999 bird species. The recorded waveforms were very diverse in terms of length and content. We converted the waveforms into frequency domain and splitted into equal segments. The segments were fed into a convolutional neural network for feature learning, which was followed by fully connected layers for classification. In the official scores our solution reached a MAP score of over 40% for main species, and MAP score of over 33% for main species mixed with background species.",
"title": ""
},
{
"docid": "731d9faffc834156d5218a09fbb82e27",
"text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.",
"title": ""
},
{
"docid": "a2b9c5f2b6299d0de91d80f9316a02e7",
"text": "In this paper, with the help of knowledge base, we build and formulate a semantic space to connect the source and target languages, and apply it to the sequence-to-sequence framework to propose a Knowledge-Based Semantic Embedding (KBSE) method. In our KBSE method, the source sentence is firstly mapped into a knowledge based semantic space, and the target sentence is generated using a recurrent neural network with the internal meaning preserved. Experiments are conducted on two translation tasks, the electric business data and movie data, and the results show that our proposed method can achieve outstanding performance, compared with both the traditional SMT methods and the existing encoder-decoder models.",
"title": ""
},
{
"docid": "288f831e93e83b86d28624e31bb2f16c",
"text": "Deep learning has made significant improvements at many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNN), which is a popular deep learning architecture designed to process data in multiple array form, show great success to almost all detection & recognition problems and computer vision tasks. However, the number of parameters in a CNN is too high such that the computers require more energy and larger memory size. In order to solve this problem, we propose a novel energy efficient model Binary Weight and Hadamard-transformed Image Network (BWHIN), which is a combination of Binary Weight Network (BWN) and Hadamard-transformed Image Network (HIN). It is observed that energy efficiency is achieved with a slight sacrifice at classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.",
"title": ""
},
{
"docid": "ff4c069ab63ced5979cf6718eec30654",
"text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.",
"title": ""
},
{
"docid": "875e12852dabbcabe24cc59b764a4226",
"text": "As more and more marketers incorporate social media as an integral part of the promotional mix, rigorous investigation of the determinants that impact consumers’ engagement in eWOM via social networks is becoming critical. Given the social and communal characteristics of social networking sites (SNSs) such as Facebook, MySpace and Friendster, this study examines how social relationship factors relate to eWOM transmitted via online social websites. Specifically, a conceptual model that identifies tie strength, homophily, trust, normative and informational interpersonal influence as an important antecedent to eWOM behaviour in SNSs was developed and tested. The results confirm that tie strength, trust, normative and informational influence are positively associated with users’ overall eWOM behaviour, whereas a negative relationship was found with regard to homophily. This study suggests that product-focused eWOM in SNSs is a unique phenomenon with important social implications. The implications for researchers, practitioners and policy makers of social media regulation are discussed.",
"title": ""
},
{
"docid": "4e2bed31e5406e30ae59981fa8395d5b",
"text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.",
"title": ""
},
{
"docid": "6f410e93fa7ab9e9c4a7a5710fea88e2",
"text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.",
"title": ""
},
{
"docid": "cd0bd7ac3aead17068c7f223fc19da60",
"text": "In this letter, a class of wideband impedance transformers based on multisection quarter-wave transmission lines and short-circuited stubs are proposed to be incorporated with good passband frequency selectivity. A synthesis approach is then presented to design this two-port asymmetrical transformer with Chebyshev frequency response. For the specified impedance transformation ratio, bandwidth, and in-band return loss, the required impedance parameters can be directly determined. Next, a transformer with two section transmission lines in the middle is characterized, where a set of design curves are given for practical design. Theoretically, the proposed multisection transformer has attained good passband frequency selectivity against the reported counterparts. Finally, a 50-110 Ω impedance transformer with a fractional bandwidth of 77.8% and 15 dB in-band return loss is designed, fabricated and measured to verify the prediction.",
"title": ""
},
{
"docid": "b1a538752056e91fd5800911f36e6eb0",
"text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.",
"title": ""
},
{
"docid": "1e4f13016c846039f7bbed47810b8b3d",
"text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.",
"title": ""
},
{
"docid": "7f83aa38f6f715285b757e235da04257",
"text": "In recent researches on inverter-based distributed generators, disadvantages of traditional grid-connected current control, such as no grid-forming ability and lack of inertia, have been pointed out. As a result, novel control methods like droop control and virtual synchronous generator (VSG) have been proposed. In both methods, droop characteristics are used to control active and reactive power, and the only difference between them is that VSG has virtual inertia with the emulation of swing equation, whereas droop control has no inertia. In this paper, dynamic characteristics of both control methods are studied, in both stand-alone mode and synchronous-generator-connected mode, to understand the differences caused by swing equation. Small-signal models are built to compare transient responses of frequency during a small loading transition, and state-space models are built to analyze oscillation of output active power. Effects of delays in both controls are also studied, and an inertial droop control method is proposed based on the comparison. The results are verified by simulations and experiments. It is suggested that VSG control and proposed inertial droop control inherits the advantages of droop control, and in addition, provides inertia support for the system.",
"title": ""
},
{
"docid": "5467003778aa2c120c36ac023f0df704",
"text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.",
"title": ""
}
] |
scidocsrr
|
f76831e70b7cf9ed3cc70387913f5c4e
|
Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning
|
[
{
"docid": "5d4797cffc06cbde079bf4019dc196db",
"text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.",
"title": ""
}
] |
[
{
"docid": "4b75c7158f6c20542385d08eca9bddb3",
"text": "PURPOSE\nExtraarticular manifestations of the joint hypermobility syndrome may include the peripheral nervous system. The purpose of this study was to investigate autonomic function in patients with this syndrome.\n\n\nMETHODS\nForty-eight patients with the joint hypermobility syndrome who fulfilled the 1998 Brighton criteria and 30 healthy control subjects answered a clinical questionnaire designed to evaluate the frequency of complaints related to the autonomic nervous system. Next, 27 patients and 21 controls underwent autonomic evaluation: orthostatic testing, cardiovascular vagal and sympathetic functions, catecholamine levels, and adrenoreceptor responsiveness.\n\n\nRESULTS\nSymptoms related to the autonomic nervous system, such as syncope and presyncope, palpitations, chest discomfort, fatigue, and heat intolerance, were significantly more common among patients. Orthostatic hypotension, postural orthostatic tachycardia syndrome, and uncategorized orthostatic intolerance were found in 78% (21/27) of patients compared with in 10% (2/21) of controls. Patients with the syndrome had a greater mean (+/- SD) drop in systolic blood pressure during hyperventilation than did controls (-11 +/- 7 mm Hg vs. -5 +/- 5 mm Hg, P = 0.02) and a greater increase in systolic blood pressure after a cold pressor test (19 +/- 10 mm Hg vs. 11 +/- 13 mm Hg, P = 0.06). Patients with the syndrome also had evidence of alpha-adrenergic (as assessed by administration of phenylephrine) and beta-adrenergic hyperresponsiveness (as assessed by administration of isoproterenol).\n\n\nCONCLUSION\nThe autonomic nervous system-related symptoms of the patients have a pathophysiological basis, which suggests that dysautonomia is an extraarticular manifestation in the joint hypermobility syndrome.",
"title": ""
},
{
"docid": "6c99c86d994460f3314865f0da2f57e4",
"text": "BACKGROUND\nThresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid.\n\n\nMETHODS\nSeveral methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed.\n\n\nRESULTS\nFor a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results.\n\n\nCONCLUSIONS\nIf the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials.",
"title": ""
},
{
"docid": "45df307e591eb146c1313686e345dede",
"text": "A high-precision CMOS time-to-digital converter IC has been designed. Time interval measurement is based on a counter and two-level interpolation realized with stabilized delay lines. Reference recycling in the delay line improves the integral nonlinearity of the interpolator and enables the use of a low frequency reference clock. Multi-level interpolation reduces the number of delay elements and registers and lowers the power consumption. The load capacitor scaled parallel structure in the delay line permits very high resolution. An INL look-up table reduces the effect of the remaining nonlinearity. The digitizer measures time intervals from 0 to 204 /spl mu/s with 8.1 ps rms single-shot precision. The resolution of 12.2 ps from a 5-MHz external reference clock is divided by means of only 20 delay elements.",
"title": ""
},
{
"docid": "78b371e7df39a1ebbad64fdee7303573",
"text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.",
"title": ""
},
{
"docid": "894f5289293a72084647e07f8e7423f7",
"text": "Convolutional Neural Networks (CNNs) have been widely adopted for many imaging applications. For image aesthetics prediction, state-of-the-art algorithms train CNNs on a recently-published large-scale dataset, AVA. However, the distribution of the aesthetic scores on this dataset is extremely unbalanced, which limits the prediction capability of existing methods. We overcome such limitation by using weighted CNNs. We train a regression model that improves the prediction accuracy of the aesthetic scores over state-of-the-art algorithms. In addition, we propose a novel histogram prediction model that not only predicts the aesthetic score, but also estimates the difficulty of performing aesthetics assessment for an input image. We further show an image enhancement application where we obtain an aesthetically pleasing crop of an input image using our regression model.",
"title": ""
},
{
"docid": "688ee7a4bde400a6afbd6972d729fad4",
"text": "Learning-to-Rank ( LtR ) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of stateof-the-art LtR , and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees ( GBRT ), Lambda-Mart ( λ-MART ), and the first public-domain implementation of Oblivious Lambda-Mart ( λ-MART ), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the qualitycost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget. © 2016 Elsevier Ltd. All rights reserved. ∗ Corresponding author. E-mail addresses: gabriele.capannini@mdh.se (G. Capannini), claudio.lucchese@isti.cnr.it , c.lucchese@isti.cnr.it (C. Lucchese), f.nardini@isti.cnr.it (F.M. Nardini), orlando@unive.it (S. Orlando), r.perego@isti.cnr.it (R. Perego), n.tonellotto@isti.cnr.it (N. Tonellotto). http://dx.doi.org/10.1016/j.ipm.2016.05.004 0306-4573/© 2016 Elsevier Ltd. All rights reserved. Please cite this article as: G. Capannini et al., Quality versus efficiency in document scoring with learning-to-rank models, Information Processing and Management (2016), http://dx.doi.org/10.1016/j.ipm.2016.05.004 2 G. Capannini et al. / Information Processing and Management 0 0 0 (2016) 1–17 ARTICLE IN PRESS JID: IPM [m3Gsc; May 17, 2016;19:28 ] Document Index Base Ranker Top Ranker Features Learning to Rank Algorithm Query First step Second step N docs K docs 1. ............ 2. ............ 3. ............ K. ............ ... ... Results Page(s) Fig. 1. The architecture of a generic machine-learned ranking pipeline.",
"title": ""
},
{
"docid": "1b4963cac3a0c3b0ae469f616b4295a8",
"text": "The volume of traveling websites is rapidly increasing. This makes relevant information extraction more challenging. Several fuzzy ontology-based systems have been proposed to decrease the manual work of a full-text query search engine and opinion mining. However, most search engines are keyword-based, and available full-text search engine systems are still imperfect at extracting precise information using different types of user queries. In opinion mining, travelers do not declare their hotel opinions entirely but express individual feature opinions in reviews. Hotel reviews have numerous uncertainties, and most featured opinions are based on complex linguistic wording (small, big, very good and very bad). Available ontology-based systems cannot extract blurred information from reviews to provide better solutions. To solve these problems, this paper proposes a new extraction and opinion mining system based on a type-2 fuzzy ontology called T2FOBOMIE. The system reformulates the user’s full-text query to extract the user requirement and convert it into the format of a proper classical full-text search engine query. The proposed system retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology. The fuzzy domain ontology, user information and hotel information are integrated to form a type-2 fuzzy merged ontology for the retrieving of feature polarity and individual hotel polarity. The Protégé OWL-2 (Ontology Web Language) tool is used to develop the type-2 fuzzy ontology. A series of experiments were designed and demonstrated that T2FOBOMIE performance is highly productive for analyzing reviews and accurate opinion mining.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "517454eb09e377bb157926e196094a2e",
"text": "Wireless sensor networks are one of the emerging areas which have equipped scientists with the capability of developing real-time monitoring systems. This paper discusses the development of a wireless sensor network(WSN) to detect landslides, which includes the design, development and implementation of a WSN for real time monitoring, the development of the algorithms needed that will enable efficient data collection and data aggregation, and the network requirements of the deployed landslide detection system. The actual deployment of the testbed is in the Idukki district of the Southern state of Kerala, India, a region known for its heavy rainfall, steep slopes, and frequent landslides.",
"title": ""
},
{
"docid": "9a3cc8e2bef4f9ecec5bf6f5111562f2",
"text": "We present a study that explores the use of a commercially available eye tracker as a control device for video games. We examine its use across multiple gaming genres and present games that utilize the eye tracker in a variety of ways. First, we describe a first-person shooter that uses the eyes to control orientation. Second, we study the use of eye movements for more natural interaction with characters in a role playing game. And lastly, we examine the use of eye tracking as a means to control a modified version of the classic action/arcade game Missile Command. Our results indicate that the use of an eye tracker can increase the immersion of a video game and can significantly alter the gameplay experience.",
"title": ""
},
{
"docid": "0daa43669ae68a81e5eb71db900976c6",
"text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.",
"title": ""
},
{
"docid": "15102e561d9640ee39952e4ad62ef896",
"text": "OBJECTIVE\nTo define the relative position of the maxilla and mandible in fetuses with trisomy 18 at 11 + 0 to 13 + 6 weeks of gestation.\n\n\nMETHODS\nA three-dimensional (3D) volume of the fetal head was obtained before karyotyping at 11 + 0 to 13 + 6 weeks of gestation in 36 fetuses subsequently found to have trisomy 18, and 200 chromosomally normal fetuses. The frontomaxillary facial (FMF) angle and the mandibulomaxillary facial (MMF) angle were measured in a mid-sagittal view of the fetal face.\n\n\nRESULTS\nIn the chromosomally normal group both the FMF and MMF angles decreased significantly with crown-rump length (CRL). In the trisomy 18 fetuses the FMF angle was significantly greater and the angle was above the 95(th) centile of the normal range in 21 (58.3%) cases. In contrast, in trisomy 18 fetuses the MMF angle was significantly smaller than that in normal fetuses and the angle was below the 5(th) centile of the normal range in 12 (33.3%) cases.\n\n\nCONCLUSIONS\nTrisomy 18 at 11 + 0 to 13 + 6 weeks of gestation is associated with both mid-facial hypoplasia and micrognathia or retrognathia that can be documented by measurement of the FMF angle and MMF angle, respectively.",
"title": ""
},
{
"docid": "db5eb3eef66f26cedb6cacf5e1373403",
"text": "In this article, we present a novel approach for modulating the shape of transitions between terrain materials to produce detailed and varied contours where blend resolution is limited. Whereas texture splatting and blend mapping add detail to transitions at the texel level, our approach addresses the broader shape of the transition by introducing intermittency and irregularity. Our results have proven that enriched detail of the blend contour can be achieved with a performance competitive to existing approaches without additional texture, geometry resources, or asset preprocessing. We achieve this by compositing blend masks on-the-fly with the subdivision of texture space into differently sized patches to produce irregular contours from minimal artistic input. Our approach is of particular importance for applications where GPU resources or artistic input is limited or impractical.",
"title": ""
},
{
"docid": "2583e0ccbf65571d98e78547c8b9aeb4",
"text": "The current evolution of the cyber-threat ecosystem shows that no system can be considered invulnerable. It is therefore important to quantify the risk level within a system and devise risk prediction methods such that proactive measures can be taken to reduce the damage of cyber attacks. We present RiskTeller, a system that analyzes binary file appearance logs of machines to predict which machines are at risk of infection months in advance. Risk prediction models are built by creating, for each machine, a comprehensive profile capturing its usage patterns, and then associating each profile to a risk level through both fully and semi-supervised learning methods. We evaluate RiskTeller on a year-long dataset containing information about all the binaries appearing on machines of 18 enterprises. We show that RiskTeller can use the machine profile computed for a given machine to predict subsequent infections with the highest prediction precision achieved to date.",
"title": ""
},
{
"docid": "113c07908c1f22c7671553c7f28c0b3f",
"text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.",
"title": ""
},
{
"docid": "e3104e5311dee57067540869f8036ba9",
"text": "Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.",
"title": ""
},
{
"docid": "df6a26b68ebc49f6cc0792ede3d8266f",
"text": "Nested Chinese Restaurant Process (nCRP) topic models are powerful nonparametric Bayesian methods to extract a topic hierarchy from a given text corpus, where the hierarchical structure is automatically determined by the data. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of nCRP topic models. However, hLDA has only been evaluated at small scale, because the existing collapsed Gibbs sampling and instantiated weight variational inference algorithms either are not scalable or sacrice inference quality with mean-eld assumptions. Moreover, an efcient distributed implementation of the data structures, such as dynamically growing count matrices and trees, is challenging. In this paper, we propose a novel partially collapsed Gibbs sampling (PCGS) algorithm, which combines the advantages of collapsed and instantiated weight algorithms to achieve good scalability as well as high model quality. An initialization strategy is presented to further improve the model quality. Finally, we propose an ecient distributed implementation of PCGS through vectorization, pre-processing, and a careful design of the concurrent data structures and communication strategy. Empirical studies show that our algorithm is 111 times more ecient than the previous open-source implementation for hLDA, with comparable or even beer model quality. Our distributed implementation can extract 1,722 topics from a 131-million-document corpus with 28 billion tokens, which is 4-5 orders of magnitude larger than the previous largest corpus, with 50 machines in 7 hours.",
"title": ""
},
{
"docid": "1ebb46b4c9e32423417287ab26cae14b",
"text": "Two field studies explored the relationship between self-awareness and transgressive behavior. In the first study, 363 Halloween trick-or-treaters were instructed to only take one candy. Self-awareness induced by the presence of a mirror placed behind the candy bowl decreased transgression rates for children who had been individuated by asking them their name and address, but did not affect the behavior of children left anonymous. Self-awareness influenced older but not younger children. Naturally occurring standards instituted by the behavior of the first child to approach the candy bowl in each group were shown to interact with the experimenter's verbally stated standard. The behavior of 349 subjects in the second study replicated the findings in the first study. Additionally, when no standard was stated by the experimenter, children took more candy when not self-aware than when self-aware.",
"title": ""
},
{
"docid": "a0a28f85247279d63a5b5f1189818f2c",
"text": "In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In this work, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under a set of new tensor incoherence conditions which only require the existence of one low-rank mode, and characterize the problems where our models tend to perform well.",
"title": ""
},
{
"docid": "d7594a6e11835ac94ee40e5d69632890",
"text": "(CLUES) is an advanced, automated mortgageunderwriting rule-based expert system. The system was developed to increase the production capacity and productivity of Countrywide branches, improve the consistency of underwriting, and reduce the cost of originating a loan. The system receives selected information from the loan application, credit report, and appraisal. It then decides whether the loan should be approved or whether it requires further review by a human underwriter. If the system approves the loan, no further review is required, and the application is funded. CLUES has been in operation since February 1993 and is currently processing more than 8500 loans each month in over 300 decentralized branches around the country.",
"title": ""
}
] |
scidocsrr
|
b8c16bf86e4334e0a9b5e9a53c883285
|
A Convex Formulation for Learning Task Relationships in Multi-Task Learning
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] |
[
{
"docid": "1f45d589a42815614d48d20b4ca4abb6",
"text": "The modification of the conventional helical antenna by two pitch angles and a truncated cone reflector was analyzed. Limits of the axial radiation mode were examined by criteria defined with axial ratio, HPBW and SLL of the antenna. Gain increase was achieved but the bandwidth of the axial radiation mode remained almost the same. The practical adjustment was made on helical antenna with dielectric cylinder and measured in a laboratory. The measurement results confirmed the improvement of the conventional antenna in terms of gain increase.",
"title": ""
},
{
"docid": "4818794eddc8af63fd99b000bd00736a",
"text": "Dysproteinemia is characterized by the overproduction of an Ig by clonal expansion of cells from the B cell lineage. The resultant monoclonal protein can be composed of the entire Ig or its components. Monoclonal proteins are increasingly recognized as a contributor to kidney disease. They can cause injury in all areas of the kidney, including the glomerular, tubular, and vascular compartments. In the glomerulus, the major mechanism of injury is deposition. Examples of this include Ig amyloidosis, monoclonal Ig deposition disease, immunotactoid glomerulopathy, and cryoglobulinemic GN specifically from types 1 and 2 cryoglobulins. Mechanisms that do not involve Ig deposition include the activation of the complement system, which causes complement deposition in C3 glomerulopathy, and cytokines/growth factors as seen in thrombotic microangiopathy and precipitation, which is involved with cryoglobulinemia. It is important to recognize that nephrotoxic monoclonal proteins can be produced by clones from any of the B cell lineages and that a malignant state is not required for the development of kidney disease. The nephrotoxic clones that do not meet requirement for a malignant condition are now called monoclonal gammopathy of renal significance. Whether it is a malignancy or monoclonal gammopathy of renal significance, preservation of renal function requires substantial reduction of the monoclonal protein. With better understanding of the pathogenesis, clone-directed strategies, such as rituximab against CD20 expressing B cell and bortezomib against plasma cell clones, have been used in the treatment of these diseases. These clone-directed therapies been found to be more effective than immunosuppressive regimens used in nonmonoclonal protein-related kidney diseases.",
"title": ""
},
{
"docid": "0418d5ce9f15a91aeaacd65c683f529d",
"text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such tokens and smartcards for verification. Multiple sets of palmha can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhance compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashin offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its p in security-critical applications. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "85b885986958b388b7fda7ca2426a583",
"text": "To reduce the risk of catheter-associated urinary tract infection (CAUTI), limiting use of indwelling catheters is encouraged with alternative collection methods and early removal. Adverse effects associated with such practices have not been described. We also determined if CAUTI preventative measures increase the risk of catheter-related complications. We hypothesized that there are complications associated with early removal of indwelling catheters. We described complications associated with indwelling catheterization and intermittent catheterization, and compared complication rates before and after policy updates changed catheterization practices. We performed retrospective cohort analysis of trauma patients admitted between August 1, 2009, and December 31, 2013 who required indwelling catheter. Associations between catheter days and adverse outcomes such as infection, bladder overdistention injury, recatheterization, urinary retention, and patients discharged with indwelling catheter were evaluated. The incidence of CAUTI and the total number of catheter days pre and post policy change were similar. The incidence rate of urinary retention and associated complications has increased since the policy changed. Practices intended to reduce the CAUTI rate are associated with unintended complications, such as urinary retention. Patient safety and quality improvement programs should monitor all complications associated with urinary catheterization practices, not just those that represent financial penalties.",
"title": ""
},
{
"docid": "afeb909f4be9da56dcaeb86d464ec75e",
"text": "Synthesizing expressive speech with appropriate prosodic variations, e.g., various styles, still has much room for improvement. Previous methods have explored to use manual annotations as conditioning attributes to provide variation information. However, the related training data are expensive to obtain and the annotated style codes can be ambiguous and unreliable. In this paper, we explore utilizing the residual error as conditioning attributes. The residual error is the difference between the prediction of a trained average model and the ground truth. We encode the residual error into a style embedding via a neural networkbased error encoder. The style embedding is then fed to the target synthesis model to provide information for modeling various style distributions more accurately. The average model and the error encoder are jointly optimized with the target synthesis model. Our proposed method has two advantages: 1) the embedding is automatically learned with no need of manual style annotations, which helps overcome data sparsity and ambiguity limitations; 2) For any unseen audio utterance, the style embedding can be efficiently generated. This enables rapid adaptation to the desired style to be achieved with only a single adaptation utterance. Experimental results show that our proposed method outperforms the baseline model in both speech quality and style similarity.",
"title": ""
},
{
"docid": "ece9554b3cb94a4cedd12d5659c8fe0d",
"text": "In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise. Hypergraphs provide a flexible and natural modeling tool to model such complex relationships. The obvious existence of such complex relationships in many real-world networks naturally motivates the problem of learning with hypergraphs. A popular learning paradigm is hypergraph-based semi-supervised learning (SSL) where the goal is to assign labels to initially unlabelled vertices in a hypergraph. Motivated by the fact that a graph convolutional network (GCN) has been effective for graph-based SSL, we propose HyperGCN, a novel GCN for SSL on attributed hypergraphs. Additionally, we show how HyperGCN can be used as a learning-based approach for combinatorial optimisation on NP-hard hypergraph problems. We demonstrate HyperGCN’s effectiveness through detailed experimentation on real-world hypergraphs. We have made HyperGCN’s source code available to foster reproducible research.",
"title": ""
},
{
"docid": "803b681a89e6f3db34061c4b26fc2cd5",
"text": "T cells redirected to specific antigen targets with engineered chimeric antigen receptors (CARs) are emerging as powerful therapies in hematologic malignancies. Various CAR designs, manufacturing processes, and study populations, among other variables, have been tested and reported in over 10 clinical trials. Here, we review and compare the results of the reported clinical trials and discuss the progress and key emerging factors that may play a role in effecting tumor responses. We also discuss the outlook for CAR T-cell therapies, including managing toxicities and expanding the availability of personalized cell therapy as a promising approach to all hematologic malignancies. Many questions remain in the field of CAR T cells directed to hematologic malignancies, but the encouraging response rates pave a wide road for future investigation.",
"title": ""
},
{
"docid": "fb1f467ab11bb4c01a9e410bf84ac258",
"text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.",
"title": ""
},
{
"docid": "252256527c17c21492e4de0ae50d9729",
"text": "Scribbles in scribble-based interactive segmentation such as graph-cut are usually assumed to be perfectly accurate, i.e., foreground scribble pixels will never be segmented as background in the final segmentation. However, it can be hard to draw perfectly accurate scribbles, especially on fine structures of the image or on mobile touch-screen devices. In this paper, we propose a novel ratio energy function that tolerates errors in the user input while encouraging maximum use of the user input information. More specifically, the ratio energy aims to minimize the graph-cut energy while maximizing the user input respected in the segmentation. The ratio energy function can be exactly optimized using an efficient iterated graph cut algorithm. The robustness of the proposed method is validated on the GrabCut dataset using both synthetic scribbles and manual scribbles. The experimental results show that the proposed algorithm is robust to the errors in the user input and preserves the \"anchoring\" capability of the user input.",
"title": ""
},
{
"docid": "95a038d92ed94e7a1cefdfab1db18c1d",
"text": "Arcing in PV systems has caused multiple residential and commercial rooftop fires. The National Electrical Code® (NEC) added section 690.11 to mitigate this danger by requiring arc-fault circuit interrupters (AFCI). Currently, the requirement is only for series arc-faults, but to fully protect PV installations from arc-fault-generated fires, parallel arc-faults must also be mitigated effectively. In order to de-energize a parallel arc-fault without module-level disconnects, the type of arc-fault must be identified so that proper action can be taken (e.g., opening the array for a series arc-fault and shorting for a parallel arc-fault). In this work, we investigate the electrical behavior of the PV system during series and parallel arc-faults to (a) understand the arcing power available from different faults, (b) identify electrical characteristics that differentiate the two fault types, and (c) determine the location of the fault based on current or voltage of the faulted array. This information can be used to improve arc-fault detector speed and functionality.",
"title": ""
},
{
"docid": "332bcd9b49f3551d8f07e4f21a881804",
"text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.",
"title": ""
},
{
"docid": "1f8b3933dc49d87204ba934f82f2f84f",
"text": "While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. Five journalism professionals who tested the tool found helpful characteristics that could assist them with gathering additional facts on breaking news, as well as facilitating discovery of potential information sources such as witnesses in the geographical locations of news.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "5953dafaebde90a0f6af717883452d08",
"text": "Compact high-voltage Marx generators have found wide ranging applications for driving resistive and capacitive loads. Parasitic or leakage capacitance in compact low-energy Marx systems has proved useful in driving resistive loads, but it can be detrimental when driving capacitive loads where it limits the efficiency of energy transfer to the load capacitance. In this paper, we show how manipulating network designs consisting of these parasitic elements along with internal and external components can optimize the performance of such systems.",
"title": ""
},
{
"docid": "ebd40aaf7fa87beec30ceba483cc5047",
"text": "Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task. Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
{
"docid": "7cd992aec08167cb16ea1192a511f9aa",
"text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to a n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.",
"title": ""
},
{
"docid": "0178f7e0f0db3dac510a8b8a94767f34",
"text": "We propose a novel method of regularization for recurrent neural networks called suprisal-driven zoneout. In this method, states zoneout (maintain their previous value rather than updating), when the suprisal (discrepancy between the last state’s prediction and target) is small. Thus regularization is adaptive and input-driven on a per-neuron basis. We demonstrate the effectiveness of this idea by achieving state-of-the-art bits per character of 1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to the best known highly-engineered compression methods.",
"title": ""
},
{
"docid": "ef02508d3d05cdda0b1b39b53f3820ec",
"text": "In natural language generation, a meaning representation of some kind is successively transformed into a sentence or a text. Naturally, a central subtask of this problem is the choice of words, orlexicalization. In this paper, we propose four major issues that determine how a generator tackles lexicalization, and survey the contributions that researchers have made to them. Open problems are identified, and a possible direction for future research is sketched.",
"title": ""
},
{
"docid": "f02bd91e8374506aa4f8a2107f9545e6",
"text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
a9fdf52d50e102648541ce8a8ca8d724
|
Static Detection of Second-Order Vulnerabilities in Web Applications
|
[
{
"docid": "827493ff47cff1defaeafff2ef180dce",
"text": "We present a static analysis algorithm for detecting security vulnerabilities in PHP, a popular server-side scripting language for building web applications. Our analysis employs a novel three-tier architecture to capture information at decreasing levels of granularity at the intrablock, intraprocedural, and interprocedural level. This architecture enables us to handle dynamic features unique to scripting languages such as dynamic typing and code inclusion, which have not been adequately addressed by previous techniques. We demonstrate the effectiveness of our approach by running our tool on six popular open source PHP code bases and finding 105 previously unknown security vulnerabilities, most of which we believe are remotely exploitable.",
"title": ""
}
] |
[
{
"docid": "c5ee2a4e38dfa27bc9d77edcd062612f",
"text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.",
"title": ""
},
{
"docid": "87a7e7fe82a5768633b606e95727244d",
"text": "Hashing is fundamental to many algorithms and data structures widely used in practice. For theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function requires an exponential number of bits to describe. Alternatively, one can provide rigorous bounds on performance when explicit families of hash functions are used, such as 2-universal or O(1)-wise independent families. For such families, performance guarantees are often noticeably weaker than for ideal hashing.\n In practice, however, it is commonly observed that simple hash functions, including 2-universal hash functions, perform as predicted by the idealized analysis for truly random hash functions. In this paper, we try to explain this phenomenon. We demonstrate that the strong performance of universal hash functions in practice can arise naturally from a combination of the randomness of the hash function and the data. Specifially, following the large body of literature on random sources and randomness extraction, we model the data as coming from a \"block source,\" whereby each new data item has some \"entropy\" given the previous ones. As long as the (Renyi) entropy per data item is sufficiently large, it turns out that the performance when choosing a hash function from a 2-universal family is essentially the same as for a truly random hash function. We describe results for several sample applications, including linear probing, balanced allocations, and Bloom filters.",
"title": ""
},
{
"docid": "2d86a717ef4f83ff0299f15ef1df5b1b",
"text": "Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an internal context change before study of the target information. Here we report the results of four experiments, in which we demonstrate that all three forms of release from PI are accompanied by a decrease in participants’ response latencies. Because response latency is a sensitive index of the size of participants’ mental search set, the results suggest that release from PI can reflect more focused memory search, with the previously studied nontarget items being largely eliminated from the search process. Our results thus provide direct evidence for a critical role of retrieval processes in PI release. 2012 Elsevier Inc. All rights reserved. Introduction buildup of PI is caused by a failure to distinguish items Proactive interference (PI) refers to the finding that memory for recently studied information can be vastly impaired by the previous study of further information (e.g., Underwood, 1957). In a typical PI experiment, participants study a (target) list of items and are later tested on it. In the PI condition, participants study further (nontarget) lists that precede encoding of the target information, whereas in the no-PI condition participants engage in an unrelated distractor task. Typically, recall of the target list is worse in the PI condition than the no-PI condition, which reflects the PI finding. PI has been extensively studied in the past century, has proven to be a very robust finding, and has been suggested to be one of the major causes of forgetting in everyday life (e.g., Underwood, 1957; for reviews, see Anderson & Neely, 1996; Crowder, 1976). Over the years, a number of theories have been put forward to account for PI, most of them suggesting a critical role of retrieval processes in this form of forgetting. For instance, temporal discrimination theory suggests that . All rights reserved. ie.uni-regensburg.de from the most recent target list from items that appeared on the earlier nontarget lists. Specifically, the theory assumes that at test participants are unable to restrict their memory search to the target list and instead search the entire set of items that have previously been exposed (Baddeley, 1990; Crowder, 1976; Wixted & Rohrer, 1993). Another retrieval account attributes PI to a generation failure. Here, reduced recall levels of the target items are thought to be due to the impaired ability to access the material’s correct memory representation (Dillon & Thomas, 1975). In contrast to these retrieval explanations of PI, some theories also suggested a role of encoding factors in PI, assuming that the prior study of other lists impairs subsequent encoding of the target list. For instance, attentional resources may deteriorate across item lists and cause the target material to be less well processed in the presence than the absence of the preceding lists (e.g., Crowder, 1976).",
"title": ""
},
{
"docid": "f3459ff684d6309ac773c20e03f86183",
"text": "We propose an algorithm to separate simultaneously speaking persons from each other, the “cocktail party problem”, using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and decorrelating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.",
"title": ""
},
{
"docid": "e7f4fc00b911b9f593020c0ac4bd80ce",
"text": "INTRODUCTION\nS2R (sigma-2 receptor)/Pgrmc1 (progesterone receptor membrane component 1) is a cytochrome-related protein that binds directly to heme and various pharmacological compounds. S2R(Pgrmc1) also associates with cytochrome P450 proteins, the EGFR receptor tyrosine kinase and the RNA-binding protein PAIR-BP1. S2R(Pgrmc1) is induced in multiple types of cancer, where it regulates tumor growth and is implicated in progesterone signaling. S2R(Pgrmc1) also increases cholesterol synthesis in non-cancerous cells and may have a role in modulating drug metabolizing P450 proteins.\n\n\nAREAS COVERED\nThis review covers the independent identification of S2R and Pgrmc1 and their induction in cancers, as well as the role of S2R(Pgrmc1) in increasing cholesterol metabolism and P450 activity. This article was formed through a PubMed literature search using, but not limited to, the terms sigma-2 receptor, Pgrmc1, Dap1, cholesterol and aromatase.\n\n\nEXPERT OPINION\nMultiple laboratories have shown that S2R(Pgrmc1) associates with various P450 proteins and increases cholesterol synthesis via Cyp51. However, the lipogenic role of S2R(Pgrmc1) is tissue-specific. Furthermore, the role of S2R(Pgrmc1) in regulating P450 proteins other than Cyp51 appears to be highly selective, with modest inhibitory activity for Cyp3A4 in vitro and a complex regulatory pattern for Cyp21. Cyp19/aromatase is a therapeutic target in breast cancer, and S2R(Pgrmc1) activated Cyp19 significantly in vitro but modestly in biochemical assays. In summary, S2R(Pgrmc1) is a promising therapeutic target for cancer and possibly cholesterol synthesis but research to date has not identified a major role in P450-mediated drug metabolism.",
"title": ""
},
{
"docid": "05db9a684a537fdf1234e92047618e18",
"text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.",
"title": ""
},
{
"docid": "319a24bca0b0849e05ce8cce327c549b",
"text": "This paper presents a summary of the Computational Linguistics and Clinical Psychology (CLPsych) 2015 shared and unshared tasks. These tasks aimed to provide apples-to-apples comparisons of various approaches to modeling language relevant to mental health from social media. The data used for these tasks is from Twitter users who state a diagnosis of depression or post traumatic stress disorder (PTSD) and demographically-matched community controls. The unshared task was a hackathon held at Johns Hopkins University in November 2014 to explore the data, and the shared task was conducted remotely, with each participating team submitted scores for a held-back test set of users. The shared task consisted of three binary classification experiments: (1) depression versus control, (2) PTSD versus control, and (3) depression versus PTSD. Classifiers were compared primarily via their average precision, though a number of other metrics are used along with this to allow a more nuanced interpretation of the performance measures.",
"title": ""
},
{
"docid": "cf369f232ba023e675f322f42a20b2c2",
"text": "Ring topology local area networks (LAN’s) using the “buffer insertion” access method have as yet received relatively little attention. In this paper we present details of a LAN of this.-, called SILK-system for integrated local communication (in German, “Kommunikation”). Sections of the paper describe the synchronous transmission technique of the ring channel, the time-multiplexed access of eight ports at each node, the “braided” interconnection for bypassing defective nodes, and the role of interface transformation units and user interfaces, as well as some traffic,characteristics and reliability aspects. SILK’S modularity and open system concept are demonstrated by the already implemented applications such as distributed text editing, local telephone or teletex exchange, and process control in a TV studio.",
"title": ""
},
{
"docid": "926db14af35f9682c28a64e855fb76e5",
"text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",
"title": ""
},
{
"docid": "f3c76c415aa4555f3f9d4c347d3c5e87",
"text": "Virtual worlds, set-up on the Internet, occur as a highly complex form of visual media. They foreshadow future developments, not only in leisure settings, but also in health care and business environments. The interaction between real-life and virtual worlds, i.e., inter-reality, has recently moved to the center of scientific interest (Bainbridge 2007). Particularly, the empirical assessment of the value of virtual embodiment and its outcomes is needed (Schultze 2010). Here, this paper aims to make a contribution. Reviewing prior media theories and corresponding conceptualizations such as presence, immersion, media literacy and emotions, we argue that in inter-reality, individual differences in perceiving and dealing with one’s own and other’s emotions influence an individual's performance. Providing construct operationalizations and model propositions, we suggest testing the theory in the context of competitive and socially interactive virtual worlds.",
"title": ""
},
{
"docid": "e5b125bdb5a17cbe926c03c3bac6935c",
"text": "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets.",
"title": ""
},
{
"docid": "e0d553cc4ca27ce67116c62c49c53d23",
"text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.",
"title": ""
},
{
"docid": "af2afb32b243af0706dd641324d63dc0",
"text": "We present a qualitative evaluation of a number of free publicly available physics engines for simulation systems and game development. A brief overview of the aspects of a physics engine is presented accompanied by a comparison of the capabilities of each physics engine. Aspects that are investigated the accuracy and computational efficiency of the integrator properties, material properties, stacks, links, and collision detection system.",
"title": ""
},
{
"docid": "2adde1812974f2d5d35d4c7e31ca7247",
"text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html (1 of 55) [17/01/2002 08:32:46 p.m.]",
"title": ""
},
{
"docid": "3dbafd997eeb5985df0f90a65ea17c9f",
"text": "This paper reviews the extended Cauchy model and the four-parameter model for describing the wavelength and temperature effects of liquid crystal (LC) refractive indices. The refractive indices of nine commercial LCs, MLC-9200-000, MLC-9200-100, MLC-6608, MLC-6241-000, 5PCH, 5CB, TL-216, E7, and E44 are measured by the Multi-wavelength Abbe Refractometer. These experimental data are used to validate the theoretical models. Excellent agreement between experiment and theory is obtained.",
"title": ""
},
{
"docid": "a357ce62099cd5b12c09c688c5b9736e",
"text": "Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.",
"title": ""
},
{
"docid": "2164fbc381033f7be87d075440053c0e",
"text": "Recently there has been a surge of interest in neural architectures for complex structured learning tasks. Along this track, we are addressing the supervised task of relation extraction and named-entity recognition via recursive neural structures and deep unsupervised feature learning. Our models are inspired by several recent works in deep learning for natural language. We have extended the previous models, and evaluated them in various scenarios, for relation extraction and namedentity recognition. In the models, we avoid using any external features, so as to investigate the power of representation instead of feature engineering. We implement the models and proposed some more general models for future work. We will briefly review previous works on deep learning and give a brief overview of recent progresses relation extraction and named-entity recognition.",
"title": ""
},
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
{
"docid": "f981f9a15062f4187dfa7ac71f19d54a",
"text": "Background\nSoccer is one of the most widely played sports in the world. However, soccer players have an increased risk of lower limb injury. These injuries may be caused by both modifiable and non-modifiable factors, justifying the adoption of an injury prevention program such as the Fédération Internationale de Football Association (FIFA) 11+. The purpose of this study was to evaluate the efficacy of the FIFA 11+ injury prevention program for soccer players.\n\n\nMethodology\nThis meta-analysis was based on the PRISMA 2015 protocol. A search using the keywords \"FIFA,\" \"injury prevention,\" and \"football\" found 183 articles in the PubMed, MEDLINE, LILACS, SciELO, and ScienceDirect databases. Of these, 6 studies were selected, all of which were randomized clinical trials.\n\n\nResults\nThe sample consisted of 6,344 players, comprising 3,307 (52%) in the intervention group and 3,037 (48%) in the control group. The FIFA 11+ program reduced injuries in soccer players by 30%, with an estimated relative risk of 0.70 (95% confidence interval, 0.52-0.93, p = 0.01). In the intervention group, 779 (24%) players had injuries, while in the control group, 1,219 (40%) players had injuries. However, this pattern was not homogeneous throughout the studies because of clinical and methodological differences in the samples. This study showed no publication bias.\n\n\nConclusion\nThe FIFA 11+ warm-up program reduced the risk of injury in soccer players by 30%.",
"title": ""
}
] |
scidocsrr
|
e89289505bece7a8c5ff3cfd0d094cac
|
A 4.2-W 10-GHz GaN MMIC Doherty Power Amplifier
|
[
{
"docid": "b0c694eb683c9afb41242298fdd4cf63",
"text": "We have demonstrated 8.5-11.5 GHz class-E MMIC high-power amplifiers (HPAs) with a peak power-added-efficiency (PAE) of 61% and drain efficiency (DE) of 70% with an output power of 3.7 W in a continuous-mode operation. At 5 W output power, PAE and DE of 58% and 67% are measured, respectively, which implies MMIC power density of 5 W/mm at Vds = 30 V. The peak gain is 11 dB, with an associated gain of 9 dB at the peak PAE. At an output power of 9 W, DE and PAE of 59% and 51 % were measured, respectively. In order to improve the linearity, we have designed and simulated X-band class-E MMIC PAs similar to a Doherty configuration. The Doherty-based class-E amplifiers show an excellent cancellation of a third-order intermodulation product (IM3), which improved the simulated two-tone linearity C/IM3 to >; 50 dBc.",
"title": ""
}
] |
[
{
"docid": "e4570b3894a333da2e2bf23bc90f6920",
"text": "The malaria parasite's chloroquine resistance transporter (CRT) is an integral membrane protein localized to the parasite's acidic digestive vacuole. The function of CRT is not known and the protein was originally described as a transporter simply because it possesses 10 transmembrane domains. In wild-type (chloroquine-sensitive) parasites, chloroquine accumulates to high concentrations within the digestive vacuole and it is through interactions in this compartment that it exerts its antimalarial effect. Mutations in CRT can cause a decreased intravacuolar concentration of chloroquine and thereby confer chloroquine resistance. However, the mechanism by which they do so is not understood. In this paper we present the results of a detailed bioinformatic analysis that reveals that CRT is a member of a previously undefined family of proteins, falling within the drug/metabolite transporter superfamily. Comparisons between CRT and other members of the superfamily provide insight into the possible role of the protein and into the significance of the mutations associated with the chloroquine resistance phenotype. The protein is predicted to function as a dimer and to be oriented with its termini in the parasite cytosol. The key chloroquine-resistance-conferring mutation (K76T) is localized in a region of the protein implicated in substrate selectivity. The mutation is predicted to alter the selectivity of the protein such that it is able to transport the cationic (protonated) form of chloroquine down its steep concentration gradient, out of the acidic vacuole, and therefore away from its site of action.",
"title": ""
},
{
"docid": "325e33bb763ed78b6b84deeb0b10453f",
"text": "The present study was conducted to identify possible acoustic cues of sarcasm. Native English speakers produced a variety of simple utterances to convey four different attitudes: sarcasm, humour, sincerity, and neutrality. Following validation by a separate naı̈ve group of native English speakers, the recorded speech was subjected to acoustic analyses for the following features: mean fundamental frequency (F0), F0 standard deviation, F0 range, mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio (HNR, to probe for voice quality changes), and one-third octave spectral values (to probe resonance changes). The results of analyses indicated that sarcasm was reliably characterized by a number of prosodic cues, although one acoustic feature appeared particularly robust in sarcastic utterances: overall reductions in mean F0 relative to all other target attitudes. Sarcasm was also reliably distinguished from sincerity by overall reductions in HNR and in F0 standard deviation. In certain linguistic contexts, sarcasm could be differentiated from sincerity and humour through changes in resonance and reductions in both speech rate and F0 range. Results also suggested a role of language used by speakers in conveying sarcasm and sincerity. It was concluded that sarcasm in speech can be characterized by a specific pattern of prosodic cues in addition to textual cues, and that these acoustic characteristics can be influenced by language used by the speaker. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9c09b8504a4e8ae249314083f89e951e",
"text": "Recently, social media sites like Facebook and Twitter have been severely criticized by policy makers, and media watchdog groups for allowing fake news stories to spread unchecked on their platforms. In response, these sites are encouraging their users to report any news story they encounter on the site, which they perceive as fake. Stories that are reported as fake by a large number of users are prioritized for fact checking by (human) experts at fact checking organizations like Snopes and PolitiFact. Thus, social media sites today are relying on their users' perceptions of the truthfulness of news stories to select stories to fact check.\n However, few studies have focused on understanding how users perceive truth in news stories, or how biases in their perceptions might affect current strategies to detect and label fake news stories. To this end, we present an in-depth analysis on users' perceptions of truth in news stories. Specifically, we analyze users' truth perception biases for 150 stories fact checked by Snopes. Based on their ground truth and the truth value perceived by users, we can classify the stories into four categories -- (i) C1: false stories perceived as false by most users, (ii) C2: true stories perceived as false by most users, (iii) C3: false stories perceived as true by most users, and (iv) C4: true stories perceived as true by most users.\n The stories that are likely to be reported (flagged) for fact checking are from the two classes C1 and C2 that have the lowest perceived truth levels. We argue that there is little to be gained by fact checking stories from C1 whose truth value is correctly perceived by most users. Although stories in C2 reveal the cynicality of users about true stories, social media sites presently do not explicitly mark them as true to resolve the confusion.\n On the contrary, stories in C3 are false stories, yet perceived as true by most users. Arguably, these stories are more damaging than C1 because the truth values of the the story in former situation is incorrectly perceived while truth values of the latter is correctly perceived. Nevertheless, the stories in C1 is likely to be fact checked with greater priority than the stories in C3! In fact, in today's social media sites, the higher the gullibility of users towards believing a false story, the less likely it is to be reported for fact checking.\n In summary, we make the following contributions in this work.\n 1. Methodological: We develop a novel method for assessing users' truth perceptions of news stories. We design a test for users to rapidly assess (i.e., at the rate of a few seconds per story) how truthful or untruthful the claims in a news story are. We then conduct our truth perception tests on-line and gather truth perceptions of 100 US-based Amazon Mechanical Turk workers for each story.\n 2. Empirical: Our exploratory analysis of users' truth perceptions reveal several interesting insights. 
For instance, (i) for many stories, the collective wisdom of the crowd (average truth rating) differs significantly from the actual truth of the story, i.e., wisdom of crowds is inaccurate, (ii) across different stories, we find evidence for both false positive perception bias (i.e., a gullible user perceiving the story to be more true than it is in reality) and false negative perception bias (i.e., a cynical user perceiving a story to be more false than it is in reality), and (iii) users' political ideologies influence their truth perceptions for the most controversial stories, it is frequently the result of users' political ideologies influencing their truth perceptions.\n 3. Practical: Based on our observations, we call for prioritizing stories to fact check in order to achieve the following three important goals: (i) Remove false news stories from circulation, (ii) Correct the misperception of the users, and (iii) Decrease the disagreement between different users' perceptions of truth.\n Finally, we provide strategies which utilize users' truth perceptions (and predictive analysis of their biases) to achieve the three goals stated above while prioritizing stories for fact checking. The full paper is available at: https://bit.ly/2T7raFO",
"title": ""
},
{
"docid": "ff9ca485a07dca02434396eca0f0c94f",
"text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.",
"title": ""
},
{
"docid": "b733ffe2cf4e0ee19b07614075c091a8",
"text": "BACKGROUND\nPENS is a rare neuro-cutaneous syndrome that has been recently described. It involves one or more congenital epidermal hamartomas of the papular epidermal nevus with \"skyline\" basal cell layer type (PENS) as well as non-specific neurological anomalies. Herein, we describe an original case in which the epidermal hamartomas are associated with autism spectrum disorder (ASD).\n\n\nPATIENTS AND METHODS\nA 6-year-old boy with a previous history of severe ASD was referred to us for asymptomatic pigmented congenital plaques on the forehead and occipital region. Clinical examination revealed a light brown verrucous mediofrontal plaque in the form of an inverted comma with a flat striated surface comprising coalescent polygonal papules, and a clinically similar round occipital plaque. Repeated biopsies revealed the presence of acanthotic epidermis covered with orthokeratotic hyperkeratosis with occasionally broadened epidermal crests and basal hyperpigmentation, pointing towards an anatomoclinical diagnosis of PENS.\n\n\nDISCUSSION\nA diagnosis of PENS hamartoma was made on the basis of the clinical characteristics and histopathological analysis of the skin lesions. This condition is defined clinically as coalescent polygonal papules with a flat or rough surface, a round or comma-like shape and light brown coloring. Histopathological examination showed the presence of a regular palisade \"skyline\" arrangement of basal cell epidermal nuclei which, while apparently pathognomonic, is neither a constant feature nor essential for diagnosis. Association of a PENS hamartoma and neurological disorders allows classification of PENS as a new keratinocytic epidermal hamartoma syndrome. The early neurological signs, of varying severity, are non-specific and include psychomotor retardation, learning difficulties, dyslexia, hyperactivity, attention deficit disorder and epilepsy. There have been no reports hitherto of the presence of ASD as observed in the case we present.\n\n\nCONCLUSION\nThis new case report of PENS confirms the autonomous nature of this neuro-cutaneous disorder associated with keratinocytic epidermal hamartoma syndromes.",
"title": ""
},
{
"docid": "e724db907bb466c108b5322a2df073da",
"text": "CRISPR/Cas9 is a versatile genome-editing technology that is widely used for studying the functionality of genetic elements, creating genetically modified organisms as well as preclinical research of genetic disorders. However, the high frequency of off-target activity (≥50%)-RGEN (RNA-guided endonuclease)-induced mutations at sites other than the intended on-target site-is one major concern, especially for therapeutic and clinical applications. Here, we review the basic mechanisms underlying off-target cutting in the CRISPR/Cas9 system, methods for detecting off-target mutations, and strategies for minimizing off-target cleavage. The improvement off-target specificity in the CRISPR/Cas9 system will provide solid genotype-phenotype correlations, and thus enable faithful interpretation of genome-editing data, which will certainly facilitate the basic and clinical application of this technology.",
"title": ""
},
{
"docid": "edccb0babf1e6fe85bb1d7204ab0ea0a",
"text": "OBJECTIVE\nControlled study of the long-term outcome of selective mutism (SM) in childhood.\n\n\nMETHOD\nA sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied.\n\n\nRESULTS\nThe symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood.\n\n\nCONCLUSION\nThis first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "1c1830e8e5154566ed03972d300906db",
"text": "Filicide is the killing of a child by his or her parent. Despite the disturbing nature of these crimes, a study of filicide classification can provide insight into their causes. Furthermore, a study of filicide classification provides information essential to accurate death certification. We report a rare case of familial filicide in which twin sisters both attempted to kill their respective children. We then suggest a detailed classification of filicide subtypes that provides a framework of motives and precipitating factors leading to filicide. We identify 16 subtypes of filicide, each of which is sufficiently characteristic to warrant a separate category. We describe in some detail the characteristic features of these subtypes. A knowledge of filicide subtypes contributes to interpretation of difficult cases. Furthermore, to protect potential child homicide victims, it is necessary to know how and why they are killed. Epidemiologic studies using filicide subtypes as their basis could provide information leading to strategies for prevention.",
"title": ""
},
{
"docid": "1301030c091eeb23d43dd3bfa6763e77",
"text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.",
"title": ""
},
{
"docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
},
{
"docid": "9acb0fe31e4586349475cf52323ef0d6",
"text": "Accurate and robust segmentation of small organs in wholebody MRI is difficult due to anatomical variation and class imbalance. Recent deep network based approaches have demonstrated promising performance on abdominal multi-organ segmentations. However, the performance on small organs is still suboptimal as these occupy only small regions of the whole-body volumes with unclear boundaries and variable shapes. A coarse-to-fine, hierarchical strategy is a common approach to alleviate this problem, however, this might miss useful contextual information. We propose a two-stage approach with weighting schemes based on auto-context and spatial atlas priors. Our experiments show that the proposed approach can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.",
"title": ""
},
{
"docid": "6318c9d0e62f1608c105b114c6395e6f",
"text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.",
"title": ""
},
{
"docid": "2393fc67fdca6b98695d0940fba19ca3",
"text": "Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real- life vulnerability data of the past 6 years from National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.",
"title": ""
},
{
"docid": "39271e70afb7ea1b1876b57dfab1d745",
"text": "This study examined the patterns or mechanism for conflict resolution in traditional African societies with particular reference to Yoruba and Igbo societies in Nigeria and Pondo tribe in South Africa. The paper notes that conflict resolution in traditional African societies provides opportunity to interact with the parties concerned, it promotes consensus-building, social bridge reconstructions and enactment of order in the society. The paper submits further that the western world placed more emphasis on the judicial system presided over by council of elders, kings’ courts, peoples (open place)",
"title": ""
},
{
"docid": "e646f83143a98e5a0b143cb30596d549",
"text": "The difference in the performance characteristics of volatile (DRAM) and non-volatile storage devices (HDD/SSDs) influences the design of database management systems (DBMSs). The key assumption has always been that the latter is much slower than the former. This affects all aspects of a DBMS's runtime architecture. But the arrival of new non-volatile memory (NVM) storage that is almost as fast as DRAM with fine-grained read/writes invalidates these previous design choices.\n In this tutorial, we provide an outline on how to build a new DBMS given the changes to hardware landscape due to NVM. We survey recent developments in this area, and discuss the lessons learned from prior research on designing NVM database systems. We highlight a set of open research problems, and present ideas for solving some of them.",
"title": ""
},
{
"docid": "35de3cc0aa21d20074b72d8b85c3a72f",
"text": "Fetus-in-fetu (FIF) is a rare entity resulting from abnormal embryogenesis in diamniotic monochorionic twins, being first described by Johann Friedrich Meckel (1800s). This occurs when a vertebrate fetus is enclosed in a normally growing fetus. Clinical manifestations vary. Detection is most often in infancy, the oldest reported age being 47. We report the case of a 4-day-old girl who was referred postnatally following a prenatal fetal scan which had revealed the presence of a multi-loculated retroperitoneal mass lesion with calcifications within. A provisional radiological diagnosis of FIF was made. Elective laparotomy revealed a well encapsulated retroperitoneal mass containing among other structures a skull vault and rudimentary limb buds. Recovery was uneventful. Here we discuss the difference between FIF and teratomas, risks of non-operative therapy and the role of serology in surveillance and detection of malignant change.",
"title": ""
},
{
"docid": "c1338abb3ddd4acb1ba7ed7ac0c4452c",
"text": "Defect prediction models that are trained on class imbalanced datasets (i.e., the proportion of defective and clean modules is not equally represented) are highly susceptible to produce inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect prediction models. Prior research efforts arrive at contradictory conclusions due to the use of different choice of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect prediction models. In this paper, we investigate the impact of 4 popularly-used class rebalancing techniques on 10 commonly-used performance measures and the interpretation of defect prediction models. We also construct statistical models to better understand in which experimental design settings that class rebalancing techniques are beneficial for defect prediction models. Through a case study of 101 datasets that span across proprietary and open-source systems, we recommend that class rebalancing techniques are necessary when quality assurance teams wish to increase the completeness of identifying software defects (i.e., Recall). However, class rebalancing techniques should be avoided when interpreting defect prediction models. We also find that class rebalancing techniques do not impact the AUC measure. Hence, AUC should be used as a standard measure when comparing defect prediction models.",
"title": ""
},
{
"docid": "71b25e3d37ad3a057a5759179403247e",
"text": "BACKGROUND\nObesity is a major health problem in the United States and around the world. To date, relationships between obesity and aspects of the built environment have not been evaluated empirically at the individual level.\n\n\nOBJECTIVE\nTo evaluate the relationship between the built environment around each participant's place of residence and self-reported travel patterns (walking and time in a car), body mass index (BMI), and obesity for specific gender and ethnicity classifications.\n\n\nMETHODS\nBody Mass Index, minutes spent in a car, kilometers walked, age, income, educational attainment, and gender were derived through a travel survey of 10,878 participants in the Atlanta, Georgia region. Objective measures of land use mix, net residential density, and street connectivity were developed within a 1-kilometer network distance of each participant's place of residence. A cross-sectional design was used to associate urban form measures with obesity, BMI, and transportation-related activity when adjusting for sociodemographic covariates. Discrete analyses were conducted across gender and ethnicity. The data were collected between 2000 and 2002 and analysis was conducted in 2004.\n\n\nRESULTS\nLand-use mix had the strongest association with obesity (BMI >/= 30 kg/m(2)), with each quartile increase being associated with a 12.2% reduction in the likelihood of obesity across gender and ethnicity. Each additional hour spent in a car per day was associated with a 6% increase in the likelihood of obesity. Conversely, each additional kilometer walked per day was associated with a 4.8% reduction in the likelihood of obesity. As a continuous measure, BMI was significantly associated with urban form for white cohorts. Relationships among urban form, walk distance, and time in a car were stronger among white than black cohorts.\n\n\nCONCLUSIONS\nMeasures of the built environment and travel patterns are important predictors of obesity across gender and ethnicity, yet relationships among the built environment, travel patterns, and weight may vary across gender and ethnicity. Strategies to increase land-use mix and distance walked while reducing time in a car can be effective as health interventions.",
"title": ""
},
{
"docid": "d031b76b0363a12c0141785ac875e6a4",
"text": "In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider.",
"title": ""
}
] |
scidocsrr
|
99cdb216e60bc17be1564c374d39ccd8
|
Comparing Performances of Big Data Stream Processing Platforms with RAM3S
|
[
{
"docid": "f35d164bd1b19f984b10468c41f149e3",
"text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.",
"title": ""
}
] |
[
{
"docid": "11a4536e40dde47e024d4fe7541b368c",
"text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.",
"title": ""
},
{
"docid": "baeddccc34585796fec12659912a757e",
"text": "Recurrent neural networks (RNNs) have shown success for many sequence-modeling tasks, but learning long-term dependencies from data remains difficult. This is often attributed to the vanishing gradient problem, which shows that gradient components relating a loss at time t to time t− τ tend to decay exponentially with τ . Long short-term memory (LSTM) and gated recurrent units (GRUs), the most widely-used RNN architectures, attempt to remedy this problem by making the decay’s base closer to 1. NARX RNNs1 take an orthogonal approach: by including direct connections, or delays, from the past, NARX RNNs make the decay’s exponent closer to 0. However, as introduced, NARX RNNs reduce the decay’s exponent only by a factor of nd, the number of delays, and simultaneously increase computation by this same factor. We introduce a new variant of NARX RNNs, called MIxed hiSTory RNNs, which addresses these drawbacks. We show that for τ ≤ 2nd−1, MIST RNNs reduce the decay’s worst-case exponent from τ/nd to log τ , while maintaining computational complexity that is similar to LSTM and GRUs. We compare MIST RNNs to simple RNNs, LSTM, and GRUs across 4 diverse tasks. MIST RNNs outperform all other methods in 2 cases, and in all cases are competitive.",
"title": ""
},
{
"docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4",
"text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.",
"title": ""
},
{
"docid": "9e310ac4876eee037e0d5c2a248f6f45",
"text": "The self-balancing two-wheel chair (SBC) is an unconventional type of personal transportation vehicle. It has unstable dynamics and therefore requires a special control to stabilize and prevent it from falling and to ensure the possibility of speed control and steering by the rider. This paper discusses the dynamic modeling and controller design for the system. The model of SBC is based on analysis of the motions of the inverted pendulum on a mobile base complemented with equations of the wheel motion and motor dynamics. The proposed control design involves a multi-loop PID control. Experimental verification and prototype implementation are discussed.",
"title": ""
},
{
"docid": "5233286436f0ecfde8e0e647e89b288f",
"text": "Each employee’s performance is important in an organization. A way to motivate it is through the application of reinforcement theory which is developed by B. F. Skinner. One of the most commonly used methods is positive reinforcement in which one’s behavior is strengthened or increased based on consequences. This paper aims to review the impact of positive reinforcement on the performances of employees in organizations. It can be applied by utilizing extrinsic reward or intrinsic reward. Extrinsic rewards include salary, bonus and fringe benefit while intrinsic rewards are praise, encouragement and empowerment. By applying positive reinforcement in these factors, desired positive behaviors are encouraged and negative behaviors are eliminated. Financial and non-financial incentives have a positive relationship with the efficiency and effectiveness of staffs.",
"title": ""
},
{
"docid": "6038975e7868b235f2b665ffbd249b68",
"text": "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks—pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.",
"title": ""
},
{
"docid": "301aee8363dffd7ae4c7ac2945a55842",
"text": "This work studies the usage of the Deep Neural Network (DNN) Bottleneck (BN) features together with the traditional MFCC features in the task of i-vector-based speaker recognition. We decouple the sufficient statistics extraction by using separate GMM models for frame alignment, and for statistics normalization and we analyze the usage of BN and MFCC features (and their concatenation) in the two stages. We also show the effect of using full-covariance GMM models, and, as a contrast, we compare the result to the recent DNN-alignment approach. On the NIST SRE2010, telephone condition, we show 60% relative gain over the traditional MFCC baseline for EER (and similar for the NIST DCF metrics), resulting in 0.94% EER.",
"title": ""
},
{
"docid": "9b30a07edc14ed2d1132421d8f372cd2",
"text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"title": ""
},
{
"docid": "b7c4d8b946ea6905a2f0da10e6dc9de6",
"text": "We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.",
"title": ""
},
{
"docid": "bd06f693359bba90de59454f32581c9c",
"text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. Using the cloud clustering framework, we identify how the design affects cloud integration services.",
"title": ""
},
{
"docid": "84c95e15ddff06200624822cc12fa51f",
"text": "A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1.",
"title": ""
},
{
"docid": "0a170051e72b58081ad27e71a3545bcf",
"text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"title": ""
},
{
"docid": "60ec8f06cdd4bf7cb27565c6d576ff40",
"text": "2.5D chips with TSV and interposer are becoming the most popular packaging method with great increased flexibility and integrated functionality. However, great challenges have been posed in the failure analysis process to precisely locate the failure point of each interconnection in ultra-small size. The electro-optic sampling (EOS) based pulsed Time-domain reflectometry (TDR) is a powerful tool for the 2.5D/3D package diagnostics with greatly increased I/O speed and density. The timing of peaks in the reflected waveform accurately reveals the faulty location. In this work, 2.5D chip with known open failure location has been analyzed by a EOS based TDR system.",
"title": ""
},
{
"docid": "5ad696a08b236e200a96589780b2b06c",
"text": "The need for increasing flexibility of industrial automation system products leads to the trend of shifting functional behavior from hardware solutions to software components. This trend causes an increasing complexity of software components and the need for comprehensive and automated testing approaches to ensure a required (high) quality level. Nevertheless, key tasks in software testing include identifying appropriate test cases that typically require a high effort for (a) test case generation/construction and (b) test case modification in case of requirements changes. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications where the test cases are specified by UML state chart diagrams. In addition we present a prototype application of the presented approach for a sorting machine. Major results showed that state charts (a) can support efficient test case generation and (b) enable automated generation of test cases and code for industrial automation systems.",
"title": ""
},
{
"docid": "e3853e259c3ae6739dcae3143e2074a8",
"text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.",
"title": ""
},
{
"docid": "7edd1ae4ec4bac9ed91e5e14326a694e",
"text": "These days, educational institutions and organizations are generating huge amount of data, more than the people can read in their lifetime. It is not possible for a person to learn, understand, decode, and interpret to find valuable information. Data mining is one of the most popular method which can be used to identify hidden patterns from large databases. User can extract historical, hidden details, and previously unknown information, from large repositories by applying required mining techniques. There are two algorithms which can be used to classify and predict, such as supervised learning and unsupervised learning. Classification is a technique which performs an induction on current data (existing data) and predicts future class. The main objective of classification is to make an unknown class to known class by consulting its neighbor class. therefore it is called as supervised learning, it builds the classifier by consulting with the known class labels such as k-nearest neighbor algorithm (k-NN), Naïve Bayes (NB), support vector machine (SVM), decision tree. Clustering is an unsupervised learning that builds a model to group similar objects into categories without consulting a class label. The main objective of clustering is find the distance between objects like nearby and faraway based on their similarities and dissimilarities it groups the objects and detects outliers. In this paper Weka tool is used to analyze by applying preprocessing, classification on institutional academic result of under graduate students of computer science & engineering. Keywords— Weka, classifier, supervised learning,",
"title": ""
},
{
"docid": "7c13ebe2897fc4870a152159cda62025",
"text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.",
"title": ""
},
{
"docid": "36c11c29f6605f7c234e68ecba2a717a",
"text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.",
"title": ""
},
{
"docid": "a433ebaeeb5dc5b68976b3ecb770c0cd",
"text": "1 abstract The importance of the inspection process has been magniied by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and nished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classiication tree for these algorithms is presented and the algorithms are grouped according to this classiication. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements , and some assembly tasks. The order among these topics closely reeects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and nished products. One of the most diicult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artiicial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello 1] gives a summary of the machine vision inspection applications in electronics industry. 01",
"title": ""
},
{
"docid": "9f5998ebc2457c330c29a10772d8ee87",
"text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, Hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.",
"title": ""
}
] |
scidocsrr
|
f739aca6dcc42816419fa73850d20acd
|
A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset
|
[
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "c474df285da8106b211dc7fe62733423",
"text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.",
"title": ""
}
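To make the EigenJoints idea above concrete, here is a minimal NumPy sketch of per-frame features built from pairwise joint differences plus frame-to-frame motion, followed by a PCA-style reduction. The exact feature set, the offset-to-initial-frame term, and the NBNN classifier from the paper are not reproduced; function names are illustrative.

```python
# Sketch of EigenJoints-style features: static posture (pairwise joint
# differences within a frame) plus motion (differences to the previous frame),
# reduced with a simple SVD-based PCA. Not the paper's exact pipeline.
import numpy as np

def frame_features(skeleton_seq: np.ndarray) -> np.ndarray:
    """skeleton_seq: (T, J, 3) array of 3D joint positions -> (T-1, D) features."""
    T, J, _ = skeleton_seq.shape
    feats = []
    for t in range(1, T):
        cur, prev = skeleton_seq[t], skeleton_seq[t - 1]
        # static posture: differences between all joint pairs in the current frame
        fcc = (cur[:, None, :] - cur[None, :, :]).reshape(-1)
        # motion: differences between current-frame and previous-frame joints
        fcp = (cur[:, None, :] - prev[None, :, :]).reshape(-1)
        feats.append(np.concatenate([fcc, fcp]))
    return np.asarray(feats)

def pca_reduce(X: np.ndarray, k: int = 32) -> np.ndarray:
    """Project the feature matrix onto its first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```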
] |
[
{
"docid": "f5532b33092d22c97d1b6ebe69de051f",
"text": "Automatic personality recognition is useful for many computational applications, including recommendation systems, dating websites, and adaptive dialogue systems. There have been numerous successful approaches to classify the “Big Five” personality traits from a speaker’s utterance, but these have largely relied on judgments of personality obtained from external raters listening to the utterances in isolation. This work instead classifies personality traits based on self-reported personality tests, which are more valid and more difficult to identify. Our approach, which uses lexical and acoustic-prosodic features, yields predictions that are between 6.4% and 19.2% more accurate than chance. This approach predicts Opennessto-Experience and Neuroticism most successfully, with less accurate recognition of Extroversion. We compare the performance of classification and regression techniques, and also explore predicting personality clusters.",
"title": ""
},
{
"docid": "a42f7e9efc4c0e2d56107397f98b15f1",
"text": "Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, making itself to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.",
"title": ""
},
{
"docid": "8140838d7ef17b3d6f6c042442de0f73",
"text": "The two vascular systems of our body are the blood and lymphatic vasculature. Our understanding of the cellular and molecular processes controlling the development of the lymphatic vasculature has progressed significantly in the last decade. In mammals, this is a stepwise process that starts in the embryonic veins, where lymphatic EC (LEC) progenitors are initially specified. The differentiation and maturation of these progenitors continues as they bud from the veins to produce scattered primitive lymph sacs, from which most of the lymphatic vasculature is derived. Here, we summarize our current understanding of the key steps leading to the formation of a functional lymphatic vasculature.",
"title": ""
},
{
"docid": "f0c334e0d626bd5be4e17f08049d573e",
"text": "The cost efficiency and diversity of digital channels facilitate marketers’ frequent and interactive communication with their customers. Digital channels like the Internet, email, mobile phones and digital television offer new prospects to cultivate customer relationships. However, there are a few models explaining how digital marketing communication (DMC) works from a relationship marketing perspective, especially for cultivating customer loyalty. In this paper, we draw together previous research into an integrative conceptual model that explains how the key elements of DMC frequency and content of brand communication, personalization, and interactivity can lead to improved customer value, commitment, and loyalty.",
"title": ""
},
{
"docid": "4e0735c47fba93e77bc33eee689ed03e",
"text": "Word-of-mouth (WOM) has been recognized as one of the most influential resources of information transmission. However, conventional WOM communication is only effective within limited social contact boundaries. The advances of information technology and the emergence of online social network sites have changed the way information is transmitted and have transcended the traditional limitations of WOM. This paper describes online interpersonal influence or electronic word of mouth (eWOM) because it plays a significant role in consumer purchase decisions.",
"title": ""
},
{
"docid": "63262d2a9abdca1d39e31d9937bb41cf",
"text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.",
"title": ""
},
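As a very reduced illustration of the structural-model idea in the passage above, the sketch below applies a spherical-head interaural time delay and a crude one-pole low-pass in place of the model's head-shadow filter. The head radius, the Woodworth-style delay formula, and the filter coefficient are placeholder choices, not the paper's parameterization (which also includes pinna and shoulder components).

```python
# Reduced sketch of a structural binaural model: interaural time delay from a
# spherical-head approximation plus a crude low-pass "head shadow" on the far
# ear. The full structural model uses a pole-zero shadow filter and pinna echoes.
import numpy as np

def itd_seconds(azimuth_rad: float, head_radius_m: float = 0.0875,
                c: float = 343.0) -> float:
    """Woodworth-style interaural time difference for a spherical head."""
    return (head_radius_m / c) * (azimuth_rad + np.sin(azimuth_rad))

def render_binaural(mono: np.ndarray, azimuth_rad: float, fs: int = 44100):
    """Return (left, right) signals for a source at the given azimuth."""
    delay = int(round(itd_seconds(abs(azimuth_rad)) * fs))
    near = mono.astype(float).copy()
    far = np.concatenate([np.zeros(delay), near])[:len(near)]
    # crude one-pole low-pass as a stand-in for the head-shadow filter
    shadowed = np.zeros_like(far)
    alpha = 0.3
    for n in range(1, len(far)):
        shadowed[n] = alpha * far[n] + (1 - alpha) * shadowed[n - 1]
    # negative azimuth: source on the left, so the left ear is the near ear
    return (near, shadowed) if azimuth_rad < 0 else (shadowed, near)
```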
{
"docid": "6d0c4e7f69169b98484e9acc3c3ffdd9",
"text": "Motion capture is a prevalent technique for capturing and analyzing human articulations. A common problem encountered in motion capture is that some marker positions are often missing due to occlusions or ambiguities. Most methods for completing missing markers may quickly become ineffective and produce unsatisfactory results when a significant portion of the markers are missing for extended periods of time. We propose a data-driven, piecewise linear modeling approach to missing marker estimation that is especially beneficial in this scenario. We model motion sequences of a training set with a hierarchy of low-dimensional local linear models characterized by the principal components. For a new sequence with missing markers, we use a pre-trained classifier to identify the most appropriate local linear model for each frame and then recover the missing markers by finding the least squares solutions based on the available marker positions and the principal components of the associated model. Our experimental results demonstrate that our method is efficient in recovering the full-body motion and is robust to heterogeneous motion data.",
"title": ""
},
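The local-linear-model recovery described above reduces to a least-squares fit in the PCA subspace using only the observed coordinates. A minimal sketch, assuming the appropriate local model (mean and components) has already been selected for the frame; names are illustrative.

```python
# Sketch of recovering missing marker coordinates with a local linear (PCA)
# model: fit the PCA coefficients from the observed coordinates only, then
# reconstruct the full frame. The per-frame model-selection step is omitted.
import numpy as np

def recover_frame(frame: np.ndarray, observed_mask: np.ndarray,
                  mean: np.ndarray, components: np.ndarray) -> np.ndarray:
    """
    frame:          (D,) marker coordinates, arbitrary values where missing
    observed_mask:  (D,) boolean, True where the coordinate was captured
    mean:           (D,) mean of the training frames for this local model
    components:     (D, k) principal components (as columns) of the model
    """
    W_obs = components[observed_mask]                   # (D_obs, k)
    y_obs = frame[observed_mask] - mean[observed_mask]  # centred observations
    coeffs, *_ = np.linalg.lstsq(W_obs, y_obs, rcond=None)
    return mean + components @ coeffs                   # full reconstructed frame
```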
{
"docid": "924a9b5ff2a60a46ef3dfd8b40abb0fc",
"text": "We extend the conceptual model developed by Amelinckx et al. (2008) by relating electronic reverse auction (ERA) project outcomes to ERA project satisfaction. We formulate hypotheses about the relationships among organizational and project antecedents, a set of financial, operational, and strategic ERA project outcomes, and ERA project satisfaction. We empirically test the extended model with a sample of 180 buying professionals from ERA project teams at large global companies. Our results show that operational and strategic outcomes are positively related to ERA project satisfaction, while price savings are not. We also find positive relationships between financial outcomes and project team expertise; operational outcomes and organizational commitment, cross-functional project team composition, and procedural fairness ; and strategic outcomes and top management support, organizational commitment, and procedural fairness. An electronic reverse auction (ERA) is ''an online, real-time dynamic auction between a buying organization and a group of pre-qualified suppliers who compete against each other to win the business to supply goods or services that have clearly defined specifications for design, quantity, quality, delivery, and related terms and conditions. These suppliers compete by bidding against each other online over the Internet using specialized software by submitting successively lower priced bids during a scheduled time period'' (Beall et al. 2003). Over the past two decades, ERAs have been used in various industries, (Beall et al. 2003, Ray et al. 2011, Wang et al. 2013). ERAs are increasingly popular among buying organizations, although their use sparks controversy and ethical concerns in the sourcing world (Charki et al. 2010). Indeed, the one-sided focus on price savings in ERAs is considered to be at odds with the benefits of long-term cooperative buyer–supplier relationships (Beall et al. 2003, Hunt et al. 2006). However, several researchers have declared that ERAs are here to stay, as they are relatively easy to install and use and have resulted in positive outcomes across a range of offerings and contexts (Beall et al. 2003, Hur et al. 2006). In prior research work on ERAs, Amelinckx et al. (2008) developed a conceptual model based on an extensive review of the electronic sourcing literature and exploratory research involving multiple case studies. The authors identified operational and strategic outcomes that buying organizations can obtain in ERAs, in addition to financial gains. Furthermore, the authors asserted that the different outcomes can be obtained jointly, through the implementation of important organizational and project antecedents, and as such alleviate …",
"title": ""
},
{
"docid": "73edaa7319dcf225c081f29146bbb385",
"text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.",
"title": ""
},
{
"docid": "f1ef6a16c85a874250148d1863ce3756",
"text": "In this paper, a triple band capacitive-fed circular patch antenna with arc-shaped slots is proposed for 1.575 GHz GPS and Wi-Fi 2.4/5.2 GHz communications on unmanned aerial vehicle (UAV) applications. In order to enhance the impedance bandwidth of the antenna, a double-layered geometry is applied in this design with a circular feeding disk placed between two layers. The antenna covers 2380 - 2508 MHz and 5100 - 6030 MHz for full support of the Wi-Fi communication between UAV and ground base station. The foam-Duroid stacked geometry can further enhance the bandwidths for both GPS and Wi-Fi bands when compared to purely Duroid form. The simulation and measurement results are reported in this paper.",
"title": ""
},
{
"docid": "4db29a3fd1f1101c3949d3270b15ef07",
"text": "Human goal-directed action emerges from the interaction between stimulus-driven sensorimotor online systems and slower-working control systems that relate highly processed perceptual information to the construction of goal-related action plans. This distribution of labor requires the acquisition of enduring action representations; that is, of memory traces which capture the main characteristics of successful actions and their consequences. It is argued here that these traces provide the building blocks for off-line prospective action planning, which renders the search through stored action representations an essential part of action control. Hence, action planning requires cognitive search (through possible options) and might have led to the evolution of cognitive search routines that humans have learned to employ for other purposes as well, such as searching for perceptual events and through memory. Thus, what is commonly considered to represent different types of search operations may all have evolved from action planning and share the same characteristics. Evidence is discussed which suggests that all types of cognitive search—be it in searching for perceptual events, for suitable actions, or through memory—share the characteristic of following a fi xed sequence of cognitive operations: divergent search followed by convergent search.",
"title": ""
},
{
"docid": "cce477dd5efd3ecbabc57dfb237b72c9",
"text": "In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80% in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.",
"title": ""
},
{
"docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c",
"text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.",
"title": ""
},
{
"docid": "80c522a65fafb98886d1d3d848605e77",
"text": "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: //github.com/ramprs/grad-cam/ along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E.",
"title": ""
},
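A compact PyTorch sketch of the Grad-CAM computation described above: gradients of the class score are global-average-pooled over the last convolutional feature map to weight its channels, followed by a ReLU and upsampling. The choice of layer and model are illustrative; the authors' released code adds guided backpropagation and further refinements.

```python
# Minimal Grad-CAM sketch for a torchvision classifier. Layer choice and
# preprocessing are illustrative, not the authors' exact implementation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()  # may warn on newer torchvision
target_layer = model.layer4                      # last conv block of ResNet-50

acts, grads = {}, {}
def fwd_hook(_module, _inputs, output):
    acts["a"] = output
    # capture the gradient flowing back into this feature map
    output.register_hook(lambda g: grads.update({"g": g}))
target_layer.register_forward_hook(fwd_hook)

def grad_cam(image: torch.Tensor, class_idx: int = None) -> torch.Tensor:
    """image: (1, 3, H, W) normalised tensor -> (H, W) heatmap in [0, 1]."""
    logits = model(image)
    idx = class_idx if class_idx is not None else logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, idx].backward()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)       # GAP of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8))[0, 0].detach()
```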
{
"docid": "15a24d02f998f0b515e35ce4c66a6dc1",
"text": "Nowadays chronic diseases are the leading cause of deaths in India. These diseases which include various ailments in the form of diabetes, stroke, cardiovascular diseases, mental health illness, cancers, and chronic lung diseases. Chronic diseases are the biggest challenge for India and these diseases are the main cause of hospitalization for elder people. People who have suffered from chronic diseases are needed to repeatedly monitor the vital signs periodically. The number of nurses in hospital is relative low compared to the number of patients in hospital, there may be a chance to miss to monitor any patient vital signs which may affect patient health. In this paper, real time monitoring vital signs of a patient is developed using wearable sensors. Without nurse help, patient know the vital signs from the sensors and the system stored the sensor value in the form of text document. By using data mining approaches, the system is trained for vital sign data. Patients give their text document to the system which in turn they know their health status without any nurse help. This system enables high risk patients to be timely checked and enhance the quality of a life of patients.",
"title": ""
},
{
"docid": "84fe6840461b63a5ccf007450f0eeef8",
"text": "The canonical Wnt cascade has emerged as a critical regulator of stem cells. In many tissues, activation of Wnt signalling has also been associated with cancer. This has raised the possibility that the tightly regulated self-renewal mediated by Wnt signalling in stem and progenitor cells is subverted in cancer cells to allow malignant proliferation. Insights gained from understanding how the Wnt pathway is integrally involved in both stem cell and cancer cell maintenance and growth in the intestinal, epidermal and haematopoietic systems may serve as a paradigm for understanding the dual nature of self-renewal signals.",
"title": ""
},
{
"docid": "0d723c344ab5f99447f7ad2ff72c0455",
"text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.",
"title": ""
},
{
"docid": "31d5e64bfc92d0987f17666841e6e648",
"text": "BACKGROUND AND PURPOSE\nThe semiquantitative noncontrast CT Alberta Stroke Program Early CT Score (ASPECTS) and RAPID automated computed tomography (CT) perfusion (CTP) ischemic core volumetric measurements have been used to quantify infarct extent. We aim to determine the correlation between ASPECTS and CTP ischemic core, evaluate the variability of core volumes within ASPECTS strata, and assess the strength of their association with clinical outcomes.\n\n\nMETHODS\nReview of a prospective, single-center database of consecutive thrombectomies of middle cerebral or intracranial internal carotid artery occlusions with pretreatment CTP between September 2010 and September 2015. CTP was processed with RAPID software to identify ischemic core (relative cerebral blood flow<30% of normal tissue).\n\n\nRESULTS\nThree hundred and thirty-two patients fulfilled inclusion criteria. Median age was 66 years (55-75), median ASPECTS was 8 (7-9), whereas median CTP ischemic core was 11 cc (2-27). Median time from last normal to groin puncture was 5.8 hours (3.9-8.8), and 90-day modified Rankin scale score 0 to 2 was observed in 54%. The correlation between CTP ischemic core and ASPECTS was fair (R=-0.36; P<0.01). Twenty-six patients (8%) had ASPECTS <6 and CTP core ≤50 cc (37% had modified Rankin scale score 0-2, whereas 29% were deceased at 90 days). Conversely, 27 patients (8%) had CTP core >50 cc and ASPECTS ≥6 (29% had modified Rankin scale 0-2, whereas 21% were deceased at 90 days). Moderate correlations between ASPECTS and final infarct volume (R=-0.42; P<0.01) and between CTP ischemic core and final infarct volume (R=0.50; P<0.01) were observed; coefficients were not significantly influenced by the time from stroke onset to presentation. Multivariable regression indicated ASPECTS ≥6 (odds ratio 4.10; 95% confidence interval, 1.47-11.46; P=0.01) and CTP core ≤50 cc (odds ratio 3.86; 95% confidence interval, 1.22-12.15; P=0.02) independently and comparably predictive of good outcome.\n\n\nCONCLUSIONS\nThere is wide variability of CTP-derived core volumes within ASPECTS strata. Patient selection may be affected by the imaging selection method.",
"title": ""
},
{
"docid": "353d9add247202dc1a31f69064c68c5c",
"text": "Deep learning technologies, which are the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing and etc. Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertises. Therefore, illegitimate reproducing, distribution, and the derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators. Therefore, it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of the model ownership.\n In this paper, we generalize the \"digital watermarking'' concept from multimedia ownership verification to deep neural network (DNNs) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermark into deep learning models, and design a remote verification mechanism to determine the model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training and activate with pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting the model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.",
"title": ""
}
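One simple way to realize the watermark-implanting idea described above is to stamp a small pattern onto a handful of inputs, relabel them with a fixed target class, train on the union, and later claim ownership from the model's accuracy on the stamped inputs. The pattern, target class, and threshold below are illustrative assumptions, not the paper's three generation algorithms.

```python
# Sketch of a trigger-set watermark: stamp a corner patch on some images,
# assign them a fixed target label, mix them into training, and verify
# ownership by the model's accuracy on stamped inputs. Choices are illustrative.
import torch

def stamp(images: torch.Tensor, value: float = 1.0) -> torch.Tensor:
    """Overlay a 4x4 corner patch on a batch of (N, C, H, W) images."""
    marked = images.clone()
    marked[:, :, :4, :4] = value
    return marked

def build_trigger_set(images: torch.Tensor, target_class: int = 0):
    """Return (stamped images, constant target labels) to mix into training."""
    labels = torch.full((images.shape[0],), target_class, dtype=torch.long)
    return stamp(images), labels

def verify_ownership(model: torch.nn.Module, trigger_x: torch.Tensor,
                     trigger_y: torch.Tensor, threshold: float = 0.9) -> bool:
    """Claim ownership when trigger-set accuracy exceeds the threshold."""
    with torch.no_grad():
        preds = model(trigger_x).argmax(dim=1)
    return (preds == trigger_y).float().mean().item() >= threshold
```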
] |
scidocsrr
|
18637c14834f664424798541ff9e3d6b
|
Secure storage system and key technologies
|
[
{
"docid": "21d84bd9ea7896892a3e69a707b03a6a",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
}
] |
[
{
"docid": "271f6291ab2c97b5e561cf06b9131f9d",
"text": "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set.",
"title": ""
},
{
"docid": "c6ef33607a015c4187ac77b18d903a8a",
"text": "OBJECTIVE\nA systematic review was conducted to identify effective intervention strategies for communication in individuals with Down syndrome.\n\n\nMETHODS\nWe updated and extended previous reviews by examining: (1) participant characteristics; (2) study characteristics; (3) characteristics of effective interventions (e.g., strategies and intensity); (4) whether interventions are tailored to the Down syndrome behavior phenotype; and (5) the effectiveness (i.e., percentage nonoverlapping data and Cohen's d) of interventions.\n\n\nRESULTS\nThirty-seven studies met inclusion criteria. The majority of studies used behavior analytic strategies and produced moderate gains in communication targets. Few interventions were tailored to the needs of the Down syndrome behavior phenotype.\n\n\nCONCLUSION\nThe results suggest that behavior analytic strategies are a promising approach, and future research should focus on replicating the effects of these interventions with greater methodological rigor.",
"title": ""
},
{
"docid": "9244acef01812d757639bd4f09631c22",
"text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.",
"title": ""
},
{
"docid": "521699fc8fc841e8ac21be51370b439f",
"text": "Scene understanding is an essential technique in semantic segmentation. Although there exist several datasets that can be used for semantic segmentation, they are mainly focused on semantic image segmentation with large deep neural networks. Therefore, these networks are not useful for real time applications, especially in autonomous driving systems. In order to solve this problem, we make two contributions to semantic segmentation task. The first contribution is that we introduce the semantic video dataset, the Highway Driving dataset, which is a densely annotated benchmark for a semantic video segmentation task. The Highway Driving dataset consists of 20 video sequences having a 30Hz frame rate, and every frame is densely annotated. Secondly, we propose a baseline algorithm that utilizes a temporal correlation. Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.",
"title": ""
},
{
"docid": "255a707951238ace366ef1ea0df833fc",
"text": "During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on the application of the GrabCut technique over a trixel mesh, obtaining very promising results for a close to real time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition with a public database.",
"title": ""
},
{
"docid": "288383c6a6d382b6794448796803699f",
"text": "A transresistance instrumentation amplifier (dual-input transresistance amplifier) was designed, and a prototype was fabricated and tested in a gamma-ray dosimeter. The circuit, explained in this letter, is a differential amplifier which is suitable for amplification of signals from current-source transducers. In the dosimeter application, the amplifier proved superior to a regular (single) transresistance amplifier, giving better temperature stability and better common-mode rejection.",
"title": ""
},
{
"docid": "07817eb2722fb434b1b8565d936197cf",
"text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.",
"title": ""
},
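The matrix/vector abstraction described above is commonly realized by unrolling convolution into a single matrix multiply (im2col followed by a GEMM). The following Python/NumPy sketch (no padding, stride 1) illustrates that idea; it is a didactic stand-in, not the paper's Matlab implementation.

```python
# Vectorising convolution as one matrix multiply: unroll input patches into
# columns (im2col), then apply all filters with a single GEMM. Didactic sketch:
# no padding, stride 1, and "convolution" means cross-correlation as in CNNs.
import numpy as np

def im2col(x: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """x: (C, H, W) -> (C*kh*kw, out_h*out_w) matrix of unrolled patches."""
    C, H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, out_h * out_w), dtype=x.dtype)
    row = 0
    for c in range(C):
        for i in range(kh):
            for j in range(kw):
                cols[row] = x[c, i:i + out_h, j:j + out_w].reshape(-1)
                row += 1
    return cols

def conv2d_gemm(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """weights: (F, C, kh, kw) -> output (F, out_h, out_w) via one GEMM."""
    F_, C, kh, kw = weights.shape
    cols = im2col(x, kh, kw)
    out = weights.reshape(F_, -1) @ cols
    return out.reshape(F_, x.shape[1] - kh + 1, x.shape[2] - kw + 1)
```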
{
"docid": "c5dc7a1ff0a3db20232fdff9cfb65381",
"text": "We replace the output layer of deep neural nets, typically the softmax function, by a novel interpolating function. And we propose end-to-end training and testing algorithms for this new architecture. Compared to classical neural nets with softmax function as output activation, the surrogate with interpolating function as output activation combines advantages of both deep and manifold learning. The new framework demonstrates the following major advantages: First, it is better applicable to the case with insufficient training data. Second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code is available at https://github.com/ BaoWangMath/DNN-DataDependentActivation.",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "e66ae650db7c4c75a88ee6cf1ea8694d",
"text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. Differences in user satisfaction between the lists is negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.",
"title": ""
},
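A minimal sketch of the business-aware re-ranking idea above: blend the predicted acceptance probability with a normalized utility (e.g., revenue) before sorting. The blend weight is a free parameter, not a value from the study; an alternative design choice is to rank by the expected utility, i.e., the product of the two terms.

```python
# Sketch of utility-aware re-ranking: score each candidate by a blend of the
# predicted acceptance probability and a normalised business utility.
from typing import List, Tuple

def rerank(candidates: List[Tuple[str, float, float]],
           beta: float = 0.3, k: int = 10) -> List[str]:
    """
    candidates: (item_id, p_accept in [0, 1], utility normalised to [0, 1])
    beta:       weight given to the business utility
    Returns the top-k item ids under the blended score.
    """
    scored = [(item, (1 - beta) * p + beta * u) for item, p, u in candidates]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [item for item, _ in scored[:k]]

# Example: three hypothetical candidates
print(rerank([("a", 0.9, 0.1), ("b", 0.7, 0.8), ("c", 0.4, 0.9)], k=2))
```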
{
"docid": "2b095980aaccd7d35d079260738279c5",
"text": "Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance when embedded in large vocabulary continuous speech recognition (LVCSR) systems due to its capability of modeling local correlations and reducing translational variations. In all previous related works for ASR, only up to two convolutional layers are employed. In light of the recent success of very deep CNNs in image classification, it is of interest to investigate the deep structure of CNNs for speech recognition in detail. In contrast to image classification, the dimensionality of the speech feature, the span size of input feature and the relationship between temporal and spectral domain are new factors to consider while designing very deep CNNs. In this work, very deep CNNs are introduced for LVCSR task, by extending depth of convolutional layers up to ten. The contribution of this work is two-fold: performance improvement of very deep CNNs is investigated under different configurations; further, a better way to perform convolution operations on temporal dimension is proposed. Experiments showed that very deep CNNs offer a 8-12% relative improvement over baseline DNN system, and a 4-7% relative improvement over baseline CNN system, evaluated on both a 15-hr Callhome and a 51-hr Switchboard LVCSR tasks.",
"title": ""
},
{
"docid": "ce7175f868e2805e9e08e96a1c9738f4",
"text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.",
"title": ""
},
{
"docid": "5c892e59bed54f149697dbdf4024fbd1",
"text": "In this paper, an online tracking system has been developed to control the arm and head of a Nao robot using Kinect sensor. The main goal of this work is to achieve that the robot is able to follow the motion of a human user in real time to track. This objective has been achieved using a RGB-D camera (Kinect v2) and a Nao robot, which is a humanoid robot with 5 degree of freedom (DOF) for each arm. The joint motions of the operator's head and arm in the real world captured by a Kinect camera can be transferred into the workspace mathematically via forward and inverse kinematics, realitically through data based UDP connection between the robot and Kinect sensor. The satisfactory performance of the proposed approaches have been achieved, which is shown in experimental results.",
"title": ""
},
{
"docid": "1a8e346b6f2cd1c368f449f9a9474e5c",
"text": "Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.",
"title": ""
},
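To illustrate the formulation above, here is a toy tabular Q-learning loop in which actions are byte-level mutations and the reward is newly observed "coverage". The coverage oracle and the state abstraction are deliberately crude stand-ins, since a real setup would instrument the program under test; none of the constants come from the paper.

```python
# Toy sketch of fuzzing as Q-learning: mutate an input, reward newly seen
# coverage points, and update a tabular Q-function over coarse states.
import random
from collections import defaultdict

ACTIONS = ["flip_byte", "insert_byte", "delete_byte"]
seen_coverage = set()
Q = defaultdict(float)

def mutate(data: bytearray, action: str) -> bytearray:
    out = bytearray(data)
    pos = random.randrange(max(len(out), 1))
    if action == "flip_byte" and out:
        out[pos] ^= 0xFF
    elif action == "insert_byte":
        out.insert(pos, random.randrange(256))
    elif action == "delete_byte" and len(out) > 1:
        del out[pos]
    return out

def toy_coverage(data: bytearray):
    """Stand-in for program instrumentation: treat (index mod 8, byte) pairs
    as coverage points."""
    return {(i % 8, b) for i, b in enumerate(data)}

def reward(data: bytearray) -> float:
    new = toy_coverage(data) - seen_coverage
    seen_coverage.update(new)
    return float(len(new))

def state_of(data: bytearray) -> int:
    return len(data) // 16          # very coarse state abstraction

def fuzz(seed: bytes, steps: int = 1000, eps: float = 0.2,
         alpha: float = 0.5, gamma: float = 0.9) -> bytearray:
    data = bytearray(seed)
    for _ in range(steps):
        s = state_of(data)
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt = mutate(data, a)
        r = reward(nxt)
        s2 = state_of(nxt)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        data = nxt
    return data

print(len(fuzz(b"hello world")))
```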
{
"docid": "b127e63ac45c81ce9fa9aa6240ce5154",
"text": "This paper examines the use of social learning platforms in conjunction with the emergent pedagogy of the `flipped classroom'. In particular the attributes of the social learning platform “Edmodo” is considered alongside the changes in the way in which online learning environments are being implemented, especially within British education. Some observations are made regarding the use and usefulness of these platforms along with a consideration of the increasingly decentralized nature of education in the United Kingdom.",
"title": ""
},
{
"docid": "fd69e05a9be607381c4b8cd69d758f41",
"text": "The increase in electronically mediated self-servic e technologies in the banking industry has impacted on the way banks service consumers. Despit e a large body of research on electronic banking channels, no study has been undertaken to e xplor the fit between electronic banking channels and banking tasks. Nor has there been rese a ch into how the ‘task-channel fit’ and other factors impact on consumers’ intention to use elect ronic banking channels. This paper proposes a theoretical model addressing these gaps. An explora tory study was first conducted, investigating industry experts’ perceptions towards the concept o f ‘task-channel fit’ and its relationship to other electronic banking channel variables. The findings demonstrated that the concept was perceived as being highly relevant by bank managers. A resear ch model was then developed drawing on the existing literature. To evaluate the research mode l quantitatively, a survey will be developed and validated, administered to a sample of consumers, a nd the resulting data used to test both measurement and structural aspects of the research model.",
"title": ""
},
{
"docid": "92cecd8329343bc3a9b0e46e2185eb1c",
"text": "The spondylo and spondylometaphyseal dysplasias (SMDs) are characterized by vertebral changes and metaphyseal abnormalities of the tubular bones, which produce a phenotypic spectrum of disorders from the mild autosomal-dominant brachyolmia to SMD Kozlowski to autosomal-dominant metatropic dysplasia. Investigations have recently drawn on the similar radiographic features of those conditions to define a new family of skeletal dysplasias caused by mutations in the transient receptor potential cation channel vanilloid 4 (TRPV4). This review demonstrates the significance of radiography in the discovery of a new bone dysplasia family due to mutations in a single gene.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "fa42192f3ffd08332e35b98019e622ff",
"text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.",
"title": ""
},
{
"docid": "939b2faa63e24c0f303b823481682c4c",
"text": "Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion.",
"title": ""
}
] |
scidocsrr
|
fc15c7921e0abe34c8a123cf78699293
|
The Basic AI Drives
|
[
{
"docid": "15004021346a3c79924733bfc38bbe82",
"text": "Self-improving systems are a promising new approach to developing artificial intelligence. But will their behavior be predictable? Can we be sure that they will behave as we intended even after many generations of selfimprovement? This paper presents a framework for answering questions like these. It shows that self-improvement causes systems to converge on an",
"title": ""
}
] |
[
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "6c0f3240b86677a0850600bf68e21740",
"text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.",
"title": ""
},
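A minimal PyTorch sketch of the combined identification + verification objective described above: a shared embedding backbone, a softmax identity head applied to each image, and a binary same/different head on the squared difference of the two embeddings. The backbone, dimensions, and loss weighting are placeholders rather than the paper's exact architecture.

```python
# Sketch of a Siamese re-ID model trained with both an identification
# (softmax) loss per image and a verification (same/different) loss on the
# squared difference of the two embeddings. Backbone and sizes are placeholders.
import torch
import torch.nn as nn

class SiameseReID(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_ids: int):
        super().__init__()
        self.backbone = backbone                  # maps images -> (N, feat_dim)
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.verif_head = nn.Linear(feat_dim, 2)  # same / different

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        id_logits_a, id_logits_b = self.id_head(fa), self.id_head(fb)
        verif_logits = self.verif_head((fa - fb) ** 2)
        return id_logits_a, id_logits_b, verif_logits

def total_loss(outputs, labels_a, labels_b, same_label, lam: float = 1.0):
    """Sum of the two identification losses and the weighted verification loss."""
    ce = nn.CrossEntropyLoss()
    id_a, id_b, verif = outputs
    return ce(id_a, labels_a) + ce(id_b, labels_b) + lam * ce(verif, same_label)
```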
{
"docid": "40714e8b4c58666e4044789ffe344493",
"text": "The paper presents a novel calibration method for fisheye lens. Five parameters, which fully reflect characters of fisheye lens, are proposed. Linear displacement platform is used to acquire precise sliding displacement between the target image and fisheye lens. Laser calibration method is designed to obtain the precise value of optical center. A convenient method, which is used to calculate the virtual focus of the fisheye lens, is proposed. To verify the result, indoor environment is built up to measure the localization error of omni-directional robot. Image including landmarks is acquired by fisheye lens and delivered to DSP (Digital Signal Processor) to futher process. Error analysis to localization of omni-directional robot is showed in the conclusion.",
"title": ""
},
{
"docid": "b0356ab3a4a3917386bfe928a68031f5",
"text": "Even when Ss fail to recall a solicited target, they can provide feeling-of-knowing (FOK) judgments about its availability in memory. Most previous studies addressed the question of FOK accuracy, only a few examined how FOK itself is determined, and none asked how the processes assumed to underlie FOK also account for its accuracy. The present work examined all 3 questions within a unified model, with the aim of demystifying the FOK phenomenon. The model postulates that the computation of FOK is parasitic on the processes involved in attempting to retrieve the target, relying on the accessibility of pertinent information. It specifies the links between memory strength, accessibility of correct and incorrect information about the target, FOK judgments, and recognition memory. Evidence from 3 experiments is presented. The results challenge the view that FOK is based on a direct, privileged access to an internal monitor.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
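As a rough illustration of the signals discussed above, the sketch below computes, per client and time window, the share of failed (NXDOMAIN) lookups and the character entropy of the failed names. The thresholds, record format, and the simple rule at the end are illustrative assumptions, not the paper's detector.

```python
# Sketch of per-host DNS features: NXDOMAIN ratio and mean character entropy
# of failed domain names within a window. Thresholds are illustrative.
import math
from collections import Counter
from typing import Iterable, Tuple

def char_entropy(name: str) -> float:
    """Shannon entropy (bits per character) of a domain name string."""
    if not name:
        return 0.0
    counts = Counter(name)
    total = len(name)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def host_features(queries: Iterable[Tuple[str, bool]]) -> Tuple[float, float]:
    """
    queries: (domain, resolved_ok) pairs observed for one client IP in a window.
    Returns (nxdomain_ratio, mean_entropy_of_failed_names).
    """
    records = list(queries)
    failed = [d for d, ok in records if not ok]
    ratio = len(failed) / max(len(records), 1)
    mean_ent = sum(char_entropy(d) for d in failed) / max(len(failed), 1)
    return ratio, mean_ent

def looks_suspicious(ratio: float, mean_ent: float) -> bool:
    return ratio > 0.5 and mean_ent > 3.5     # illustrative thresholds
```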
{
"docid": "3dc4384744f2f85983bc58b0a8a241c6",
"text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. Individualized radiographs for every patient are still recommended.",
"title": ""
},
{
"docid": "f161b9891e8b1a828b2a177c5f9e6761",
"text": "This paper focuses on molten aluminum and aluminum alloy droplet generation for application to net-form manufacturing of structural components. The mechanism of droplet formation from capillary stream break-up provides the allure for use in net-form manufacturing due to the intrinsic uniformity of droplets generated under proper forcing conditions and the high rates at which they are generated. Additionally, droplet formation from capillary stream break-up allows the customization of droplet streams for a particular application. The current status of the technology under development is presented, and issues affecting the microstructure and the mechanical properties of the manufactured components are studied in an effort to establish a relationship between processing parameters and properties. ∗ Corresponding author Introduction High precision droplet-based net-form manufacturing of structural components is gaining considerable academic and industrial interest due to the promise of improved component quality resulting from rapid solidification processing and the economic benefits associated with fabricating a structural component in one integrated operation. A droplet based net-form manufacturing technique is under development at UCI which is termed Precision Droplet-Based Net-Form Manufacturing (PDM). The crux of the technique lies in the ability to generate highly uniform streams of molten metal droplets such as aluminum or aluminum alloys. Though virtually any Newtonian fluid that can be contained in a crucible is suitable for the technology, this work concentrates on the generation and deposition of molten aluminum alloy (2024) droplets that are generated and deposited in an inert environment. Figure 1 is a conceptual schematic of the current status of PDM. Droplets are generated from capillary stream break-up in an inert environment and are deposited onto a substrate whose motion is controlled by a programmable x-y table. In this way, tubes with circular, square, and triangular cross sections have been fabricated such as those illustrated in Figure 2. Tubes have been fabricated with heights as great as 11.0 cm. The surface morphology of the component is governed by the thermal conditions at the substrate. If we denote the solidified component and the substrate the \"effective substrate\", then the newly arriving droplets must have sufficient thermal energy to locally remelt a thin layer (with dimensions on the order of 10 microns or less) of the effective substrate. Remelting action of the previously deposited and solidified material will insure the removal of individual splat boundaries and result in a more homogeneous component. The thermal requirements for remelting have been studied analytically in reference [1]. It was shown in that work that there exists a minimum substrate temperature for a given droplet impingement temperature that results in remelting. The \"bump iness\" apparent in the circular cylinder shown in Figure 2 is due to the fact that the initial substrate temperature was insufficient to initiate the onset of remelting. As the component grows in height by successive droplet deliveries, the effective substrate temperature increases due to the fact that droplets are delivered at rates too high to allow cooling before the arrival of the next layer of droplets. Therefore, within the constraints of the current embodiment of the technology, there exists a certain height of the component for which remelting will occur. 
This height is demarcated at the location where the \"bumpiness\" is eliminated and relative \"smoothness\" prevails, as can be seen in the circular cylinder. As the component grows beyond this height, the remelting depth will continue to increase due to increased heating to the effective substrate. Hence, the component walls will thicken due to slower solidification rates. The objective of ongoing work (not presented here) is to identify the heat flux required for the minimum remelting of the effective substrate, and to develop processing conditions for which this heat flux seen by the substrate remains constant for each geometry desired. In this manner, the fidelity of the microstructure, mechanical properties, and geometry will remain intact. As is evident from Figure 1, the research presented in this work did not employ electrostatic charging and deflection. However, in the final realization of the technology, charging and deflection will be utilized in order to control the droplet density as a function of the component geometry, or to print fine details at high speed and at high precision. The charging and deflection of droplets bears many similarities to the technology of ink-jet printing, except that in the current application of PDM, large lateral areas are printed, thereby requiring significantly higher droplet charges than in ink-jet printing. The high charges on the closely spaced droplets result in mutual inter-droplet interactions that are not apparent in the application of ink-jet printing. Recent experimental and numerical results on the subject of droplet interactions due to the application of high electrostatic charges are presented elsewhere [2]. Though not yet utilized for net-form manufacturing of structural components, droplet charging and deflection has been successfully applied to the \"printing\" of electronic components such as BGA's (Ball Grid Arrays). These results can be found in reference [3]. Research on controlled droplet formation from capillary stream break-up over the past decade has enabled ultra-precise charged droplet formation, deflection, and deposition that makes feasible many emerging applications in net-form manufacturing and electronic component fabrication [4-9]. Unlike the Drop-on-Demand mode of droplet formation, droplets can be generated at rates typically on the order of 10,000 to 20,000 droplets per second, from capillary stream break-up and can be electrostatically charged and deflected onto a substrate with a measured accuracy of ± 12.5 μm. Other net-form manufacturing technologies that rely on uniform droplet formation include 3D Printing (3DP) [10-12] and Shape Deposition Manufacturing (SDM) [13-15]. In 3DP, parts are manufactured by generating droplets of a binder material with the Drop-on-Demand mode of generation and depositing them onto selected areas of a layer of metal or ceramic powder. After the binder dries, the print bed is lowered and another layer of powder is spread in order to repeat the process. The process is repeated until the 3-D component is fabricated. Like PDM, the process of SDM relies on uniform generation of molten metal droplets. However the droplet generation technique is markedly different than droplet generation from capillary stream break-up. [Figure 2: Examples of preliminary components fabricated with PDM; the tall square tube shown horizontally is 11.0 cm.] 
[Figure 1: Conceptual schematic of cylinder fabrication on a flat-plate substrate with controlled droplet deposition.]",
"title": ""
},
{
"docid": "917c703c04ec76bd209c3b6f9e2b868d",
"text": "Crowd simulation for virtual environments offers many challenges centered on the trade-offs between rich behavior, control and computational cost. In this paper we present a new approach to controlling the behavior of agents in a crowd. Our method is scalable in the sense that increasingly complex crowd behaviors can be created without a corresponding increase in the complexity of the agents. Our approach is also more authorable; users can dynamically specify which crowd behaviors happen in various parts of an environment. Finally, the character motion produced by our system is visually convincing. We achieve our aims with a situation-based control structure. Basic agents have very limited behaviors. As they enter new situations, additional, situation-specific behaviors are composed on the fly to enable agents to respond appropriately. The composition is done using a probabilistic mechanism. We demonstrate our system with three environments including a city street and a theater.",
"title": ""
},
{
"docid": "7b717d6c4506befee2a374333055e2d1",
"text": "This is the pre-acceptance version, to read the final version please go to IEEE Geoscience and Remote Sensing Magazine on IEEE XPlore. Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a “black-box” solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. X. Zhu and L. Mou are with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany and with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Germany, E-mails: xiao.zhu@dlr.de; lichao.mou@dlr.de. D. Tuia was with the Department of Geography, University of Zurich, Switzerland. He is now with the Laboratory of GeoInformation Science and Remote Sensing, Wageningen University of Research, the Netherlands. E-mail: devis.tuia@wur.nl. G.-S Xia and L. Zhang are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University. E-mail:guisong.xia@whu.edu.cn; zlp62@whu.edu.cn. F. Xu is with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan Univeristy. E-mail: fengxu@fudan.edu.cn. F. Fraundorfer is with the Institute of Computer Graphics and Vision, TU Graz, Austria and with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany. E-mail: fraundorfer@icg.tugraz.at. The work of X. Zhu and L. Mou are supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087], Acronym: So2Sat), Helmholtz Association under the framework of the Young Investigators Group “SiPEO” (VH-NG-1018, www.sipeo.bgu.tum.de) and China Scholarship Council. The work of D. Tuia is supported by the Swiss National Science Foundation (SNSF) under the project NO. PP0P2 150593. The work of G.-S. Xia and L. Zhang are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 41501462 and No. 41431175. The work of F. Xu are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 61571134. October 12, 2017 DRAFT ar X iv :1 71 0. 03 95 9v 1 [ cs .C V ] 1 1 O ct 2 01 7 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE, IN PRESS. 2",
"title": ""
},
{
"docid": "57c0db8c200b94baa28779ff4f47d630",
"text": "The development of the Web services lets many users easily provide their opinions recently. Automatic summarization of enormous sentiments has been expected. Intuitively, we can summarize a review with traditional document summarization methods. However, such methods have not well-discussed “aspects”. Basically, a review consists of sentiments with various aspects. We summarize reviews for each aspect so that the summary presents information without biasing to a specific topic. In this paper, we propose a method for multiaspects review summarization based on evaluative sentence extraction. We handle three features; ratings of aspects, the tf -idf value, and the number of mentions with a similar topic. For estimating the number of mentions, we apply a clustering algorithm. By integrating these features, we generate a more appropriate summary. The experiment results show the effectiveness of our method.",
"title": ""
},
{
"docid": "af8fbdfbc4c4958f69b3936ff2590767",
"text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.",
"title": ""
},
{
"docid": "7003d59d401bce0f6764cc6aa25b5dd2",
"text": "This paper presents a 13 bit 50 MS/s fully differential ring amplifier based SAR-assisted pipeline ADC, implemented in 65 nm CMOS. We introduce a new fully differential ring amplifier, which solves the problems of single-ended ring amplifiers while maintaining the benefits of high gain, fast slew based charging and an almost rail-to-rail output swing. We implement a switched-capacitor (SC) inter-stage residue amplifier that uses this new fully differential ring amplifier to give accurate amplification without calibration. In addition, a new floated detect-and-skip (FDAS) capacitive DAC (CDAC) switching method reduces the switching energy and improves linearity of first-stage CDAC. With these techniques, the prototype ADC achieves measured SNDR, SNR, and SFDR of 70.9 dB (11.5b), 71.3 dB and 84.6 dB, respectively, with a Nyquist frequency input. The prototype achieves 13 bit linearity without calibration and consumes 1 mW. This measured performance is equivalent to Walden and Schreier FoMs of 6.9 fJ/conversion ·step and 174.9 dB, respectively.",
"title": ""
},
{
"docid": "9c183992492880d8b6e1a644e014a72f",
"text": "Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward-Roger's adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance-covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh-Feldt correction controlled the Type I error rate with only very few exceptions, even if samples sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N ≥ K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.",
"title": ""
},
{
"docid": "3a75cf54ace0ebb56b985e1452151a91",
"text": "Ubiquitous networks support the roaming service for mobile communication devices. The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in the roaming services, and researchers put their interests on the authentication schemes. Recently, in 2016, Gope and Hwang found that mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and destitution of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To get over the weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with random oracle model, formal verification with the tool Proverif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "52357eff7eda659bcf225d0ab70cb8d2",
"text": "BACKGROUND\nFlexibility is an important physical quality. Self-myofascial release (SMFR) methods such as foam rolling (FR) increase flexibility acutely but how long such increases in range of motion (ROM) last is unclear. Static stretching (SS) also increases flexibility acutely and produces a cross-over effect to contralateral limbs. FR may also produce a cross-over effect to contralateral limbs but this has not yet been identified.\n\n\nPURPOSE\nTo explore the potential cross-over effect of SMFR by investigating the effects of a FR treatment on the ipsilateral limb of 3 bouts of 30 seconds on changes in ipsilateral and contralateral ankle DF ROM and to assess the time-course of those effects up to 20 minutes post-treatment.\n\n\nMETHODS\nA within- and between-subject design was carried out in a convenience sample of 26 subjects, allocated into FR (n=13) and control (CON, n=13) groups. Ankle DF ROM was recorded at baseline with the in-line weight-bearing lunge test for both ipsilateral and contralateral legs and at 0, 5, 10, 15, 20 minutes following either a two-minute seated rest (CON) or 3 3 30 seconds of FR of the plantar flexors of the dominant leg (FR). Repeated measures ANOVA was used to examine differences in ankle DF ROM.\n\n\nRESULTS\nNo significant between-group effect was seen following the intervention. However, a significant within-group effect (p<0.05) in the FR group was seen between baseline and all post-treatment time-points (0, 5, 10, 15 and 20 minutes). Significant within-group effects (p<0.05) were also seen in the ipsilateral leg between baseline and at all post-treatment time-points, and in the contralateral leg up to 10 minutes post-treatment, indicating the presence of a cross-over effect.\n\n\nCONCLUSIONS\nFR improves ankle DF ROM for at least 20 minutes in the ipsilateral limb and up to 10 minutes in the contralateral limb, indicating that FR produces a cross-over effect into the contralateral limb. The mechanism producing these cross-over effects is unclear but may involve increased stretch tolerance, as observed following SS.\n\n\nLEVELS OF EVIDENCE\n2c.",
"title": ""
},
{
"docid": "605a078c74d37007654094b4b426ece8",
"text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.",
"title": ""
},
{
"docid": "03a036bea8fac6b1dfa7d9a4783eef66",
"text": "Face recognition from the real data, capture images, sensor images and database images is challenging problem due to the wide variation of face appearances, illumination effect and the complexity of the image background. Face recognition is one of the most effective and relevant applications of image processing and biometric systems. In this paper we are discussing the face recognition methods, algorithms proposed by many researchers using artificial neural networks (ANN) which have been used in the field of image processing and pattern recognition. How ANN will used for the face recognition system and how it is effective than another methods will also discuss in this paper. There are many ANN proposed methods which give overview face recognition using ANN. Therefore, this research includes a general review of face detection studies and systems which based on different ANN approaches and algorithms. The strengths and limitations of these literature studies and systems were included, and also the performance analysis of different ANN approach and algorithm is analysing in this research study.",
"title": ""
},
{
"docid": "da2bc0813d4108606efef507e50100e3",
"text": "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links are important to know about; they are at least equally important as and do complement the positive link prediction process in order to plan better for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though very related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and we reached the conclusion that our approach performs very well in both bipartite and non-bipartite graphs.",
"title": ""
},
{
"docid": "3b2cbc85f5fb17aba8a872c12ba4928a",
"text": "For over five decades, liquid injectable silicone has been used for soft-tissue augmentation. Its use has engendered polarized reactions from the public and from physicians. Adherents of this product tout its inert chemical structure, ease of use, and low cost. Opponents of silicone cite the many reports of complications, including granulomas, pneumonitis, and disfiguring nodules that are usually the result of large-volume injection and/or industrial grade or adulterated material. Unfortunately, as recently as 2006, reports in The New England Journal of Medicine and The New York Times failed to distinguish between the use of medical grade silicone injected by physicians trained in the microdroplet technique and the use of large volumes of industrial grade products injected by unlicensed or unskilled practitioners. This review separates these two markedly different procedures. In addition, it provides an overview of the chemical structure of liquid injectable silicone, the immunology of silicone reactions within the body, treatment for cosmetic improvement including human immunodeficiency virus lipoatrophy, technical considerations for its injection, complications seen following injections, and some considerations of the future for silicone soft-tissue augmentation.",
"title": ""
}
] |
scidocsrr
|
bc772df5bd360e4dcaac189ee483a6b8
|
RGB-D object modelling for object recognition and tracking
|
[
{
"docid": "d02af961d8780a06ae0162647603f8bb",
"text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.",
"title": ""
}
] |
[
{
"docid": "02156199912027e9230b3c000bcbe87b",
"text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.",
"title": ""
},
{
"docid": "ec6f53bd2cbc482c1450934b1fd9e463",
"text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.",
"title": ""
},
{
"docid": "42303331bf6713c1809468532c153693",
"text": "................................................................................................................................................ V Table of",
"title": ""
},
{
"docid": "57f1671f7b73f0b888f55a1f31a9f1a1",
"text": "The ongoing high relevance of business intelligence (BI) for the management and competitiveness of organizations requires a continuous, transparent, and detailed assessment of existing BI solutions in the enterprise. This paper presents a BI maturity model (called biMM) that has been developed and refined over years. It is used for both, in surveys to determine the overall BI maturity in German speaking countries and for the individual assessment in organizations. A recently conducted survey shows that the current average BI maturity can be assigned to the third stage (out of five stages). Comparing future (planned) activities and current challenges allows the derivation of a BI research agenda. The need for action includes among others emphasizing BI specific organizational structures, such as the establishment of BI competence centers, a stronger focus on profitability, and improved effectiveness of the BI architecture.",
"title": ""
},
{
"docid": "d679fb65265fb48cc53ae771b0f254af",
"text": "This paper presents a tunable transmission line (t-line) structure, featuring independent control of line inductance and capacitance. The t-line provides variable delay while maintaining relatively constant characteristic impedance using direct digital control through FET switches. As an application of this original structure, a 60 GHz RF-phase shifter for phased-array applications is implemented in a 32 nm SOI process attaining state-of-the-art performance. Measured data from two phase shifter variants at 60 GHz showed phase changes of 175° and 185°, S21 losses of 3.5-7.1 dB and 6.1-7.6 dB, RMS phase errors of 2° and 3.2°, and areas of 0.073 mm2 and 0.099 mm2 respectively.",
"title": ""
},
{
"docid": "5ca14c0581484f5618dd806a6f994a03",
"text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).",
"title": ""
},
{
"docid": "50603dae3b5131ba4e6d956d57402e10",
"text": "Due to the spread of color laser printers to the general public, numerous forgeries are made by color laser printers. Printer identification is essential to preventing damage caused by color laser printed forgeries. This paper presents a new method to identify a color laser printer using photographed halftone images. First, we preprocess the photographed images to extract the halftone pattern regardless of the variation of the illumination conditions. Then, 15 halftone texture features are extracted from the preprocessed images. A support vector machine is used to be trained and classify the extracted features. Experiments are performed on seven color laser printers. The experimental results show that the proposed method is suitable for identifying the source color laser printer using photographed images.",
"title": ""
},
{
"docid": "74f8127bc620fa1c9797d43dedea4d45",
"text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.",
"title": ""
},
{
"docid": "a60752274fdae6687c713538215d0269",
"text": "Some soluble phosphate salts, heavily used in agriculture as highly effective phosphorus (P) fertilizers, cause surface water eutrophication, while solid phosphates are less effective in supplying the nutrient P. In contrast, synthetic apatite nanoparticles could hypothetically supply sufficient P nutrients to crops but with less mobility in the environment and with less bioavailable P to algae in comparison to the soluble counterparts. Thus, a greenhouse experiment was conducted to assess the fertilizing effect of synthetic apatite nanoparticles on soybean (Glycine max). The particles, prepared using one-step wet chemical method, were spherical in shape with diameters of 15.8 ± 7.4 nm and the chemical composition was pure hydroxyapatite. The data show that application of the nanoparticles increased the growth rate and seed yield by 32.6% and 20.4%, respectively, compared to those of soybeans treated with a regular P fertilizer (Ca(H2PO4)2). Biomass productions were enhanced by 18.2% (above-ground) and 41.2% (below-ground). Using apatite nanoparticles as a new class of P fertilizer can potentially enhance agronomical yield and reduce risks of water eutrophication.",
"title": ""
},
{
"docid": "149ffd270f39a330f4896c7d3aa290be",
"text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.",
"title": ""
},
{
"docid": "9e0cbbe8d95298313fd929a7eb2bfea9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "bd91ef7524a262fb40083d3fb34f8d0e",
"text": "Simulators have become an integral part of the computer architecture research and design process. Since they have the advantages of cost, time, and flexibility, architects use them to guide design space exploration and to quantify the efficacy of an enhancement. However, long simulation times and poor accuracy limit their effectiveness. To reduce the simulation time, architects have proposed several techniques that increase the simulation speed or throughput. To increase the accuracy, architects try to minimize the amount of error in their simulators and have proposed adding statistical rigor to their simulation methodology. Since a wide range of approaches exist and since many of them overlap, this paper describes, classifies, and compares them to aid the computer architect in selecting the most appropriate one.",
"title": ""
},
{
"docid": "65d3d020ee63cdeb74cb3da159999635",
"text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.",
"title": ""
},
{
"docid": "c45d911aea9d06208a4ef273c9ab5ff3",
"text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.",
"title": ""
},
{
"docid": "accf1445bcf32b7e3c03443bf722a882",
"text": "The Chua circuit is among the simplest non-linear circuits that shows most complex dynamical behavior, including chaos which exhibits a variety of bifurcation phenomena and attractors. In this paper, Chua attractor’s chaotic oscillator, synchronization and masking communication circuits were designed and simulated. The electronic circuit oscilloscope outputs of the realized Chua system is also presented. Simulation and oscilloscope outputs are used to illustrate the accuracy of the designed and realized Chua chaotic oscillator circuits. The Chua system is addressed suitable for chaotic synchronization circuits and chaotic masking communication circuits using Matlab® and MultiSIM® software. Simulation results are used to visualize and illustrate the effectiveness of Chua chaotic system in synchronization and application of secure communication.",
"title": ""
},
{
"docid": "28f6751a043201fd8313944b4f79101f",
"text": "FLLL 2 Preface This is a printed collection of the contents of the lecture \" Genetic Algorithms: Theory and Applications \" which I gave first in the winter semester 1999/2000 at the Johannes Kepler University in Linz. The reader should be aware that this manuscript is subject to further reconsideration and improvement. Corrections, complaints, and suggestions are cordially welcome. The sources were manifold: Chapters 1 and 2 were written originally for these lecture notes. All examples were implemented from scratch. The third chapter is a distillation of the books of Goldberg [13] and Hoffmann [15] and a handwritten manuscript of the preceding lecture on genetic algorithms which was given by Andreas Stöckl in 1993 at the Johannes Kepler University. Chapters 4, 5, and 7 contain recent adaptations of previously published material from my own master thesis and a series of lectures which was given by Francisco Herrera and myself at the Second Summer School on Advanced Control at the Slovak Technical University, Bratislava, in summer 1997 [4]. Chapter 6 was written originally, however, strongly influenced by A. Geyer-Schulz's works and H. Hörner's paper on his C++ GP kernel [18]. I would like to thank all the students attending the first GA lecture in Winter 1999/2000, for remaining loyal throughout the whole term and for contributing much to these lecture notes with their vivid, interesting, and stimulating questions, objections, and discussions. Last but not least, I want to express my sincere gratitude to Sabine Lumpi and Susanne Saminger for support in organizational matters, and Pe-ter Bauer for proofreading .",
"title": ""
},
{
"docid": "1906aa92c26bb95b4cb79b4bfe7e362f",
"text": "As Artificial Intelligence (AI) techniques become more powerful and easier to use they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy that categorises applications according to their point of application, the type of AI technology used and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and to create strategies for its use.",
"title": ""
},
{
"docid": "cac8f1df581628a7e64e779751fafaf0",
"text": "The vast majority of Web services and sites are hosted in various kinds of cloud services, and ordering some level of quality of service (QoS) in such systems requires effective load-balancing policies that choose among multiple clouds. Recently, software-defined networking (SDN) is one of the most promising solutions for load balancing in cloud data center. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. By using these technologies, SDN and cloud computing can improve cloud reliability, manageability, scalability and controllability. SDN-based cloud is a new type cloud in which SDN technology is used to acquire control on network infrastructure and to provide networking-as-a-service (NaaS) in cloud computing environments. In this paper, we introduce an SDN-enhanced Inter cloud Manager (S-ICM) that allocates network flows in the cloud environment. S-ICM consists of two main parts, monitoring and decision making. For monitoring, S-ICM uses SDN control message that observes and collects data, and decision-making is based on the measured network delay of packets. Measurements are used to compare S-ICM with a round robin (RR) allocation of jobs between clouds which spreads the workload equitably, and with a honeybee foraging algorithm (HFA). We see that S-ICM is better at avoiding system saturation than HFA and RR under heavy load formula using RR job scheduler. Measurements are also used to evaluate whether a simple queueing formula can be used to predict system performance for several clouds being operated under an RR scheduling policy, and show the validity of the theoretical approximation.",
"title": ""
},
{
"docid": "3fda6dfa4aa7973725baa1dd9dc7f542",
"text": "This paper presents a novel cognitive management architecture developed within the H2020 CogNet project to manage 5G networks. We also present the instantiation of this architecture for two Operator use cases, namely ‘SLA enforcement’ and ‘Mobile Quality Predictor’. The SLA enforcement use case tackles the SLA management with machine learning techniques, precisely, LSTM (Long Short Term Memory). The second use case, Mobile Quality Predictor, proposes a framework using machine learning to enable an accurate bandwidth prediction for each mobile subscriber in real-time. A problem statement, stakeholders, an instantiation of the cognitive management architecture, a related work as well as an evaluation results are presented for each use case.",
"title": ""
},
{
"docid": "6a3210307c98b4311271c29da142b134",
"text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.",
"title": ""
}
] |
scidocsrr
|
9af8e7dc3fea72d4cc8a202a17ebf31e
|
Personalization Method for Tourist Point of Interest (POI) Recommendation
|
[
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
}
] |
[
{
"docid": "f78d0dae400b331d6dcb4de9d10ca2f0",
"text": "How ontologies provide the semantics, as explained here with the help of Harry Potter and his owl Hedwig.",
"title": ""
},
{
"docid": "2579a6082d157d8b9940b3ca8084f741",
"text": "In general, conventional Arbiter-based Physically Unclonable Functions (PUFs) generate responses with low unpredictability. The N-XOR Arbiter PUF, proposed in 2007, is a well-known technique for improving this unpredictability. In this paper, we propose a novel design for Arbiter PUF, called Double Arbiter PUF, to enhance the unpredictability on field programmable gate arrays (FPGAs), and we compare our design to conventional N-XOR Arbiter PUFs. One metric for judging the unpredictability of responses is to measure their tolerance to machine-learning attacks. Although our previous work showed the superiority of Double Arbiter PUFs regarding unpredictability, its details were not clarified. We evaluate the dependency on the number of training samples for machine learning, and we discuss the reason why Double Arbiter PUFs are more tolerant than the N-XOR Arbiter PUFs by evaluating intrachip variation. Further, the conventional Arbiter PUFs and proposed Double Arbiter PUFs are evaluated according to other metrics, namely, their uniqueness, randomness, and steadiness. We demonstrate that 3-1 Double Arbiter PUF archives the best performance overall.",
"title": ""
},
{
"docid": "597b893e42df1bfba3d17b2d3ec31539",
"text": "Genetic Programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. Lately, there has been considerable interest in GP's community to develop semantic genetic operators, i.e., operators that work on the phenotype. In this contribution, we describe EvoDAG (Evolving Directed Acyclic Graph) which is a Python library that implements a steady-state semantic Genetic Programming with tournament selection using an extension of our previous crossover operators based on orthogonal projections in the phenotype space. To show the effectiveness of EvoDAG, it is compared against state-of-the-art classifiers on different benchmark problems, experimental results indicate that EvoDAG is very competitive.",
"title": ""
},
{
"docid": "ba291f7d938f73946969476fdc96f0df",
"text": "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow for others to validate their results.",
"title": ""
},
{
"docid": "a9201c32c903eba5cc25a744134a1c3c",
"text": "This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.",
"title": ""
},
{
"docid": "2c48dfb1ea7bc0defbe1643fa4708614",
"text": "Text in natural images is an important source of information, which can be utilized for many real-world applications. This work focuses on a new problem: distinguishing images that contain text from a large volume of natural images. To address this problem, we propose a novel convolutional neural network variant, called Multi-scale Spatial Partition Network (MSP-Net). The network classifies images that contain text or not, by predicting text existence in all image blocks, which are spatial partitions at multiple scales on an input image. The whole image is classified as a text image (an image containing text) as long as one of the blocks is predicted to contain text. The network classifies images very efficiently by predicting all blocks simultaneously in a single forward propagation. Through experimental evaluations and comparisons on public datasets, we demonstrate the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "4bce72901777783578637fc6bfeb6267",
"text": "This study examines the causal relationship between carbon dioxide emissions, electricity consumption and economic growth within a panel vector error correction model for five ASEAN countries over the period 1980 to 2006. The long-run estimates indicate that there is a statistically significant positive association between electricity consumption and emissions and a non-linear relationship between emissions and real output, consistent with the Environmental Kuznets Curve. The long-run estimates, however, do not indicate the direction of causality between the variables. The results from the Granger causality tests suggest that in the long-run there is unidirectional Granger causality running from electricity consumption and emissions to economic growth. The results also point to unidirectional Granger causality running from emissions to electricity consumption in the short-run.",
"title": ""
},
{
"docid": "4468a8d7f01c1b3e6adcf316bdc34f81",
"text": "Hyper-connected and digitized governments are increasingly advancing a vision of data-driven government as producers and consumers of big data in the big data ecosystem. Despite the growing interests in the potential power of big data, we found paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using systematic literature review approach we developed initial framework for examining impacts of socio-political, strategic change, analytical, and technical capability challenges in enhancing public policy and service through big data. We then applied the framework to conduct case study research on two large-size city governments’ big data use. The findings indicate the framework’s usefulness, shedding new insights into the unique government context. Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain impacts of organizational capability challenges in transforming government.",
"title": ""
},
{
"docid": "4f64b2b2b50de044c671e3d0d434f466",
"text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …",
"title": ""
},
{
"docid": "7e557091d8cfe6209b1eda3b664ab551",
"text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-",
"title": ""
},
{
"docid": "5b1241edf4a9853614a18139323f74eb",
"text": "This paper presents a W-band SPDT switch implemented using PIN diodes in a new 90 nm SiGe BiCMOS technology. The SPDT switch achieves a minimum insertion loss of 1.4 dB and an isolation of 22 dB at 95 GHz, with less than 2 dB insertion loss from 77-134 GHz, and greater than 20 dB isolation from 79-129 GHz. The input and output return losses are greater than 10 dB from 73-133 GHz. By reverse biasing the off-state PIN diodes, the P1dB is larger than +24 dBm. To the authors' best knowledge, these results demonstrate the lowest loss and highest power handling capability achieved by a W-band SPDT switch in any silicon-based technology reported to date.",
"title": ""
},
{
"docid": "d88059813c4064ec28c58a8ab23d3030",
"text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.",
"title": ""
},
{
"docid": "0c0d0b6d4697b1a0fc454b995bcda79a",
"text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.",
"title": ""
},
{
"docid": "7343d29bfdc1a4466400f8752dce4622",
"text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.",
"title": ""
},
{
"docid": "2b71cfacf2b1e0386094711d8b326ff7",
"text": "In-car navigation systems are designed with effectiveness and efficiency (e.g., guiding accuracy) in mind. However, finding a way and discovering new places could also be framed as an adventurous, stimulating experience for the driver and passengers. Inspired by Gaver and Martin's (2000) notion of \"ambiguity and detour\" and Hassenzahl's (2010) Experience Design, we built ExplorationRide, an in-car navigation system to foster exploration. An empirical in situ exploration demonstrated the system's ability to create an exploration experience, marked by a relaxed at-mosphere, a loss of sense of time, excitement about new places and an intensified relationship with the landscape.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "53518256d6b4f3bb4e8dcf28a35f9284",
"text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than store posted price when they ask for price-matching, one would expect the price matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the stores profits initially decrease and then increase. While price-matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showrooming. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. This strategy can be implemented in two different ways. One, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known brand strategy dominates the store brand strategy.",
"title": ""
},
{
"docid": "91f45641d96b519dd65bf00249571a99",
"text": "Tissue perfusion is determined by both blood vessel geometry and the rheological properties of blood. Blood is a nonNewtonian fluid, its viscosity being dependent on flow conditions. Blood and plasma viscosities, as well as the rheological properties of blood cells (e.g., deformability and aggregation of red blood cells), are influenced by disease processes and extreme physiological conditions. These rheological parameters may in turn affect the blood flow in vessels, and hence tissue perfusion. Unfortunately it is not always possible to determine if a change in rheological parameters is the cause or the result of a disease process. The hemorheology-tissue perfusion relationship is further complicated by the distinct in vivo behavior of blood. Besides the special hemodynamic mechanisms affecting the composition of blood in various regions of the vascular system, autoregulation based on vascular control mechanisms further complicates this relationship. Hemorheological parameters may be especially important for adequate tissue perfusion if the vascular system is geometrically challenged.",
"title": ""
},
{
"docid": "dd34e763b3fdf0a0a903b773fe1a84be",
"text": "Natural language processing (NLP) is a vibrant field of interdisciplinary Computer Science research. Ultimately, NLP seeks to build intelligence into software so that software will be able to process a natural language as skillfully and artfully as humans. Prolog, a general purpose logic programming language, has been used extensively to develop NLP applications or components thereof. This report is concerned with introducing the interested reader to the broad field of NLP with respect to NLP applications that are built in Prolog or from Prolog components.",
"title": ""
},
{
"docid": "27ba6cfdebdedc58ab44b75a15bbca05",
"text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.",
"title": ""
}
] |
scidocsrr
|
d15b94152661b013e935f44373d6bc23
|
The Good, The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games
|
[
{
"docid": "a52fce0b7419d745a85a2bba27b34378",
"text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.",
"title": ""
}
] |
[
{
"docid": "bbeebb29c7220009c8d138dc46e8a6dd",
"text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:",
"title": ""
},
{
"docid": "45dbc5a3adacd0cc1374f456fb421ee9",
"text": "The purpose of this article is to discuss current techniques used with poly-l-lactic acid to safely and effectively address changes observed in the aging face. Several important points deserve mention. First, this unique agent is not a filler but a stimulator of the host's own collagen, which then acts to volumize tissue in a gradual, progressive, and predictable manner. The technical differences between the use of biostimulatory agents and replacement fillers are simple and straightforward, but are critically important to the safe and successful use of these products and will be reviewed in detail. Second, in addition to gains in technical insights that have improved our understanding of how to use the product to best advantage, where to use the product to best advantage in facial filling has also improved with ever-evolving insights into the changes observed in the aging face. Finally, it is important to recognize that a patient's final outcome, and the amount of product and work it will take to get there, is a reflection of the quality of tissues with which they start. This is, of course, an issue of patient selection and not product selection.",
"title": ""
},
{
"docid": "dd741d612ee466aecbb03f5e1be89b90",
"text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.",
"title": ""
},
{
"docid": "7f368ea27e9aa7035c8da7626c409740",
"text": "The GANs are generative models whose random samples realistically reflect natural images. It also can generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance log-likelihood of test data under the conditional distributions compared to the methods of concatenation.",
"title": ""
},
{
"docid": "0d6a28cc55d52365986382f43c28c42c",
"text": "Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.",
"title": ""
},
{
"docid": "a91add591aacaa333e109d77576ba463",
"text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.",
"title": ""
},
{
"docid": "8c79eb51cfbc9872a818cf6467648693",
"text": "A compact frequency-reconfigurable slot antenna for LTE (2.3 GHz), AMT-fixed service (4.5 GHz), and WLAN (5.8 GHz) applications is proposed in this letter. A U-shaped slot with short ends and an L-shaped slot with open ends are etched in the ground plane to realize dual-band operation. By inserting two p-i-n diodes inside the slots, easy reconfigurability of three frequency bands over a frequency ratio of 2.62:1 can be achieved. In order to reduce the cross polarization of the antenna, another L-shaped slot is introduced symmetrically. Compared to the conventional reconfigurable slot antenna, the size of the antenna is reduced by 32.5%. Simulated and measured results show that the antenna can switch between two single-band modes (2.3 and 5.8 GHz) and two dual-band modes (2.3/4.5 and 4.5/5.8 GHz). Also, stable radiation patterns are obtained.",
"title": ""
},
{
"docid": "94631c7be7b2a992d006cd642dcc502c",
"text": "This paper describes nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem or portions of the problem against each other. Nagging is both fault tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging’s effectiveness and scalability as applied to A* search, α β minimax game tree search, and the Davis-Putnam algorithm.",
"title": ""
},
{
"docid": "0e5eb8191cea7d3a59f192aa32a214c4",
"text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate humangenerated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copyand reconstructionbased extensions lead to noticeable improvements.",
"title": ""
},
{
"docid": "54b094c7747c8ac0b1fbd1f93e78fd8e",
"text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system and practically used on board ships.",
"title": ""
},
{
"docid": "0fd61e297560ebb8bcf1aafdf011ae67",
"text": "Research is fundamental to the advancement of medicine and critical to identifying the most optimal therapies unique to particular societies. This is easily observed through the dynamics associated with pharmacology, surgical technique and the medical equipment used today versus short years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results; thus, improving health science and clinical practice and advancing health policy. While advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research fundamentally needs to be evaluated to identify the most efficient methods of evaluation. The primary objective of this paper is to look at a specific research methodology when applied to the area of clinical research, especially extracorporeal circulation and its prognosis for the future.",
"title": ""
},
{
"docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33",
"text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.",
"title": ""
},
{
"docid": "8863a617cee49b578a3902d12841053b",
"text": "N Engl J Med 2009;361:1475-85. Copyright © 2009 Massachusetts Medical Society. DNA damage has emerged as a major culprit in cancer and many diseases related to aging. The stability of the genome is supported by an intricate machinery of repair, damage tolerance, and checkpoint pathways that counteracts DNA damage. In addition, DNA damage and other stresses can trigger a highly conserved, anticancer, antiaging survival response that suppresses metabolism and growth and boosts defenses that maintain the integrity of the cell. Induction of the survival response may allow interventions that improve health and extend the life span. Recently, the first candidate for such interventions, rapamycin (also known as sirolimus), has been identified.1 Compromised repair systems in tumors also offer opportunities for intervention, making it possible to attack malignant cells in which maintenance of the genome has been weakened. Time-dependent accumulation of damage in cells and organs is associated with gradual functional decline and aging.2 The molecular basis of this phenomenon is unclear,3-5 whereas in cancer, DNA alterations are the major culprit. In this review, I present evidence that cancer and diseases of aging are two sides of the DNAdamage problem. An examination of the importance of DNA damage and the systems of genome maintenance in relation to aging is followed by an account of the derailment of genome guardian mechanisms in cancer and of how this cancerspecific phenomenon can be exploited for treatment.",
"title": ""
},
{
"docid": "e9750bf1287847b6587ad28b19e78751",
"text": "Biomedical engineering handles the organization and functioning of medical devices in the hospital. This is a strategic function of the hospital for its balance, development, and growth. This is a major focus in internal and external reports of the hospital. It's based on piloting of medical devices needs and the procedures of biomedical teams’ intervention. Multi-year projects of capital and operating expenditure in medical devices are planned as coherently as possible with the hospital's financial budgets. An information system is an essential tool for monitoring medical devices engineering and relationship with medical services.",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
{
"docid": "1dcc48994fada1b46f7b294e08f2ed5d",
"text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.",
"title": ""
},
{
"docid": "222c51f079c785bb2aa64d2937e50ff0",
"text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.",
"title": ""
},
{
"docid": "cf999fc9b1a604dadfc720cf1bbfafdc",
"text": "The characteristics of the extracellular polymeric substances (EPS) extracted with nine different extraction protocols from four different types of anaerobic granular sludge were studied. The efficiency of four physical (sonication, heating, cationic exchange resin (CER), and CER associated with sonication) and four chemical (ethylenediaminetetraacetic acid, ethanol, formaldehyde combined with heating, or NaOH) EPS extraction methods was compared to a control extraction protocols (i.e., centrifugation). The nucleic acid content and the protein/polysaccharide ratio of the EPS extracted show that the extraction does not induce abnormal cellular lysis. Chemical extraction protocols give the highest EPS extraction yields (calculated by the mass ratio between sludges and EPS dry weight (DW)). Infrared analyses as well as an extraction yield over 100% or organic carbon content over 1 g g−1 of DW revealed, nevertheless, a carry-over of the chemical extractants into the EPS extracts. The EPS of the anaerobic granular sludges investigated are predominantly composed of humic-like substances, proteins, and polysaccharides. The EPS content in each biochemical compound varies depending on the sludge type and extraction technique used. Some extraction techniques lead to a slightly preferential extraction of some EPS compounds, e.g., CER gives a higher protein yield.",
"title": ""
},
{
"docid": "22719028c913aa4d0407352caf185d7a",
"text": "Although the fact that genetic predisposition and environmental exposures interact to shape development and function of the human brain and, ultimately, the risk of psychiatric disorders has drawn wide interest, the corresponding molecular mechanisms have not yet been elucidated. We found that a functional polymorphism altering chromatin interaction between the transcription start site and long-range enhancers in the FK506 binding protein 5 (FKBP5) gene, an important regulator of the stress hormone system, increased the risk of developing stress-related psychiatric disorders in adulthood by allele-specific, childhood trauma–dependent DNA demethylation in functional glucocorticoid response elements of FKBP5. This demethylation was linked to increased stress-dependent gene transcription followed by a long-term dysregulation of the stress hormone system and a global effect on the function of immune cells and brain areas associated with stress regulation. This identification of molecular mechanisms of genotype-directed long-term environmental reactivity will be useful for designing more effective treatment strategies for stress-related disorders.",
"title": ""
},
{
"docid": "44bd4ef644a18dc58a672eb91c873a98",
"text": "Reactive oxygen species (ROS) contain one or more unpaired electrons and are formed as intermediates in a variety of normal biochemical reactions. However, when generated in excess amounts or not appropriately controlled, ROS initiate extensive cellular damage and tissue injury. ROS have been implicated in the progression of cancer, cardiovascular disease and neurodegenerative and neuroinflammatory disorders, such as multiple sclerosis (MS). In the last decade there has been a major interest in the involvement of ROS in MS pathogenesis and evidence is emerging that free radicals play a key role in various processes underlying MS pathology. To counteract ROS-mediated damage, the central nervous system is equipped with an intrinsic defense mechanism consisting of endogenous antioxidant enzymes. Here, we provide a comprehensive overview on the (sub)cellular origin of ROS during neuroinflammation as well as the detrimental effects of ROS in processing underlying MS lesion development and persistence. In addition, we will discuss clinical and experimental studies highlighting the therapeutic potential of antioxidant protection in the pathogenesis of MS.",
"title": ""
}
] |
scidocsrr
|
4534df7a48326def1badb12418df5c36
|
Internet of things for sleep quality monitoring system: A survey
|
[
{
"docid": "8bcc223389b7cc2ce2ef4e872a029489",
"text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.",
"title": ""
}
] |
[
{
"docid": "b4e1fdeb6d467eddfea074b802558fb8",
"text": "This paper proposes a novel and more accurate iris segmentation framework to automatically segment iris region from the face images acquired with relaxed imaging under visible or near-infrared illumination, which provides strong feasibility for applications in surveillance, forensics and the search for missing children, etc. The proposed framework is built on a novel total-variation based formulation which uses l1 norm regularization to robustly suppress noisy texture pixels for the accurate iris localization. A series of novel and robust post processing operations are introduced to more accurately localize the limbic boundaries. Our experimental results on three publicly available databases, i.e., FRGC, UBIRIS.v2 and CASIA.v4-distance, achieve significant performance improvement in terms of iris segmentation accuracy over the state-of-the-art approaches in the literature. Besides, we have shown that using iris masks generated from the proposed approach helps to improve iris recognition performance as well. Unlike prior work, all the implementations in this paper are made publicly available to further advance research and applications in biometrics at-d-distance.",
"title": ""
},
{
"docid": "49dcfa6459c83b20f731c61f3a1ed7cf",
"text": "The number of unmanned vehicles and devices deployed underwater is increasing. New communication systems and networking protocols are required to handle this growth. Underwater free-space optical communication is poised to augment acoustic communication underwater, especially for short-range, mobile, multi-user environments in future underwater systems. Existing systems are typically point-to-point links with strict pointing and tracking requirements. In this paper we demonstrate compact smart transmitters and receivers for underwater free-space optical communications. The receivers have segmented wide field of view and are capable of estimating angle of arrival of signals. The transmitters are highly directional with individually addressable LEDs for electronic switched beamsteering, and are capable of estimating water quality from its backscattered light collected by its co-located receiver. Together they form enabling technologies for non-traditional networking schemes in swarms of unmanned vehicles underwater.",
"title": ""
},
{
"docid": "9d979b8cf09dd54b28e314e2846f02a6",
"text": "Purpose – The objective of this paper is to analyse whether individuals’ socioeconomic characteristics – age, gender and income – influence their online shopping behaviour. The individuals analysed are experienced e-shoppers i.e. individuals who often make purchases on the internet. Design/methodology/approach – The technology acceptance model was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behaviour of e-shoppers are based on their own experiences. The information obtained has been tested using causal and multi-sample analyses. Findings – The results show that socioeconomic variables moderate neither the influence of previous use of the internet nor the perceptions of e-commerce; in short, they do not condition the behaviour of the experienced e-shopper. Practical implications – The results obtained help to determine that once individuals attain the status of experienced e-shoppers their behaviour is similar, independently of their socioeconomic characteristics. The internet has become a marketplace suitable for all ages and incomes and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. Originality/value – Previous research related to the socioeconomic variables affecting e-commerce has been aimed at forecasting who is likely to make an initial online purchase. In contrast to the majority of existing studies, it is considered that the current development of the online environment should lead to analysis of a new kind of e-shopper (experienced purchaser), whose behaviour differs from that studied at the outset of this research field. The experience acquired with online shopping nullifies the importance of socioeconomic characteristics.",
"title": ""
},
{
"docid": "5a3b8a2ec8df71956c10b2eb10eabb99",
"text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.",
"title": ""
},
{
"docid": "29e5d267bebdeb2aa22b137219b4407e",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
{
"docid": "755535335da1eb05e4b4a163a8f3d2ac",
"text": "Calcium pyrophosphate (CPP) crystal deposition (CPPD) is associated with ageing and osteoarthritis, and with uncommon disorders such as hyperparathyroidism, hypomagnesemia, hemochromatosis and hypophosphatasia. Elevated levels of synovial fluid pyrophosphate promote CPP crystal formation. This extracellular pyrophosphate originates either from the breakdown of nucleotide triphosphates by plasma-cell membrane glycoprotein 1 (PC-1) or from pyrophosphate transport by the transmembrane protein progressive ankylosis protein homolog (ANK). Although the etiology of apparent sporadic CPPD is not well-established, mutations in the ANK human gene (ANKH) have been shown to cause familial CPPD. In this Review, the key regulators of pyrophosphate metabolism and factors that lead to high extracellular pyrophosphate levels are described. Particular emphasis is placed on the mechanisms by which mutations in ANKH cause CPPD and the clinical phenotype of these mutations is discussed. Cartilage factors predisposing to CPPD and CPP-crystal-induced inflammation and current treatment options for the management of CPPD are also described.",
"title": ""
},
{
"docid": "84ece888e2302d13775973f552c6b810",
"text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.",
"title": ""
},
{
"docid": "1a22f7d1d57a00669f3052f8906ac4fa",
"text": "BACKGROUND\nThere have been previous representative nutritional status surveys conducted in Hungary, but this is the first one that examines overweight and obesity prevalence according to the level of urbanization and in different geographic regions among 6-8-year-old children. We also assessed whether these variations were different by sex.\n\n\nMETHODS\nThis survey was part of the fourth data collection round of World Health Organization (WHO) Childhood Obesity Surveillance Initiative which took place during the academic year 2016/2017. The representative sample was determined by two-stage cluster sampling. A total of 5332 children (48.4% boys; age 7.54 ± 0.64 years) were measured from all seven geographic regions including urban (at least 500 inhabitants per square kilometer; n = 1598), semi-urban (100 to 500 inhabitants per square kilometer; n = 1932) and rural (less than 100 inhabitants per square kilometer; n = 1802) areas.\n\n\nRESULTS\nUsing the WHO reference, prevalence of overweight and obesity within the whole sample were 14.2, and 12.7%, respectively. According to the International Obesity Task Force (IOTF) reference, rates were 12.6 and 8.6%. Northern Hungary and Southern Transdanubia were the regions with the highest obesity prevalence of 11.0 and 12.0%, while Central Hungary was the one with the lowest obesity rate (6.1%). The prevalence of overweight and obesity tended to be higher in rural areas (13.0 and 9.8%) than in urban areas (11.9 and 7.0%). Concerning differences in sex, girls had higher obesity risk in rural areas (OR = 2.0) but boys did not. Odds ratios were 2.0-3.4 in different regions for obesity compared to Central Hungary, but only among boys.\n\n\nCONCLUSIONS\nOverweight and obesity are emerging problems in Hungary. Remarkable differences were observed in the prevalence of obesity by geographic regions. These variations can only be partly explained by geographic characteristics.\n\n\nTRIAL REGISTRATION\nStudy protocol was approved by the Scientific and Research Ethics Committee of the Medical Research Council ( 61158-2/2016/EKU ).",
"title": ""
},
{
"docid": "ab231cbc45541b5bdbd0da82571b44ca",
"text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.",
"title": ""
},
{
"docid": "3f723663369de329a05ac258d36379eb",
"text": "This paper reviews the history of aerosol therapy; discusses patient, drug, and device factors that can influence the success of aerosol therapy; and identifies trends that will drive the science of aerosol therapy in the future. Aerosol medication is generally less expensive, works more rapidly, and produces fewer side effects than the same drug given systemically. Aerosol therapy has been used for thousands of years by steaming and burning plant material. In the 50 years since the invention of the pressurized metered-dose inhaler, advances in drugs and devices have made aerosols the most commonly used way to deliver therapy for asthma and COPD. The requirements for aerosol therapy depend on the target site of action and the underlying disease. Medication to treat airways disease should deposit on the conducting airways. Effective deposition of airway particles generally requires particle size between 0.5 and 5 microm mass median aerodynamic diameter; however, a smaller particle size neither equates to greater side effects nor greater effectiveness. However, medications like peptides intended for systemic absorption, need to deposit on the alveolar capillary bed. Thus ultrafine particles, a slow inhalation, and relatively normal airways that do not hinder aerosol penetration will optimize systemic delivery. Aerosolized antimicrobials are often used for the treatment of cystic fibrosis or bronchiectasis, and mucoactive agents to promote mucus clearance have been delivered by aerosol. As technology improves, a greater variety of novel medications are being developed for aerosol delivery, including for therapy of pulmonary hypertension, as vaccines, for decreasing dyspnea, to treat airway inflammation, for migraine headache, for nicotine and drug addiction, and ultimately for gene therapy. Common reasons for therapeutic failure of aerosol medications include the use of inactive or depleted medications, inappropriate use of the aerosol device, and, most importantly, poor adherence to prescribed therapy. The respiratory therapist plays a key role in patient education, device selection, and outcomes assessment.",
"title": ""
},
{
"docid": "448040bcefe4a67a2a8c4b2cf75e7ebc",
"text": "Visual analytics has been widely studied in the past decade. One key to make visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review the previous work on visual analytics pipelines and individual modules from multiple perspectives: data, visualization, model and knowledge. In each module we discuss various representations and descriptions of pipelines inside the module, and compare the commonalities and the differences among them.",
"title": ""
},
{
"docid": "0f87cd3209d3cc28b60425eeab37f1a4",
"text": "This paper presents low-loss 3-D transmission lines and vertical interconnects fabricated by aerosol jet printing (AJP) which is an additive manufacturing technology. AJP stacks up multiple layers with minimum feature size as small as 20 μm in the xy-direction and 0.7 μm in the z-direction. It also solves the problem of fabricating vias to realize the vertical transition by 3-D printing. The loss of the stripline is measured to be 0.53 dB/mm at 40 GHz. The vertical transition achieves a broadband bandwidth from 0.1 to 40 GHz. The results of this paper demonstrate the feasibility of utilizing 3-D printing for low-cost multilayer system-on-package RF/millimeter-wave front-ends.",
"title": ""
},
{
"docid": "b7851d3e08d29d613fd908d930afcd6b",
"text": "Word sense embeddings represent a word sense as a low-dimensional numeric vector. While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78.",
"title": ""
},
{
"docid": "74ffa7a819d415ed6381f4128cc04fdd",
"text": "The process of identifying the actual meanings of words in a given text fragment has a long history in the field of computational linguistics. Due to its importance in understanding the semantics of natural language, it is considered one of the most challenging problems facing this field. In this article we propose a new unsupervised similarity-based word sense disambiguation (WSD) algorithm that operates by computing the semantic similarity between glosses of the target word and a context vector. The sense of the target word is determined as that for which the similarity between gloss and context vector is greatest. Thus, whereas conventional unsupervised WSD methods are based on measuring pairwise similarity between words, our approach is based on measuring semantic similarity between sentences. This enables it to utilize a higher degree of semantic information, and is more consistent with the way that human beings disambiguate; that is, by considering the greater context in which the word appears. We also show how performance can be further improved by incorporating a preliminary step in which the relative importance of words within the original text fragment is estimated, thereby providing an ordering that can be used to determine the sequence in which words should be disambiguated. We provide empirical results that show that our method performs favorably against the state-of-the-art unsupervised word sense disambiguation methods, as evaluated on several benchmark datasets through different models of evaluation.",
"title": ""
},
{
"docid": "3e817504c0db80831d9edbda60254247",
"text": "OBJECTIVES\nThe purpose of this descriptive study was to investigate the current situation of clinical alarms in intensive care unit (ICU), nurses' recognition of and fatigue in relation to clinical alarms, and obstacles in alarm management.\n\n\nMETHODS\nSubjects were ICU nurses and devices from 48 critically ill patient cases. Data were collected through direct observation of alarm occurrence and questionnaires that were completed by the ICU nurses. The observation time unit was one hour block. One bed out of 56 ICU beds was randomly assigned to each observation time unit.\n\n\nRESULTS\nOverall 2,184 clinical alarms were counted for 48 hours of observation, and 45.5 clinical alarms occurred per hour per subject. Of these, 1,394 alarms (63.8%) were categorized as false alarms. The alarm fatigue score was 24.3 ± 4.0 out of 35. The highest scoring item was \"always get bothered due to clinical alarms\". The highest scoring item in obstacles was \"frequent false alarms, which lead to reduced attention or response to alarms\".\n\n\nCONCLUSIONS\nNurses reported that they felt some fatigue due to clinical alarms, and false alarms were also obstacles to proper management. An appropriate hospital policy should be developed to reduce false alarms and nurses' alarm fatigue.",
"title": ""
},
{
"docid": "6964d3ac400abd6ace1ed48c36d68d06",
"text": "Sentiment Analysis (SA) is indeed a fascinating area of research which has stolen the attention of researchers as it has many facets and more importantly it promises economic stakes in the corporate and governance sector. SA has been stemmed out of text analytics and established itself as a separate identity and a domain of research. The wide ranging results of SA have proved to influence the way some critical decisions are taken. Hence, it has become relevant in thorough understanding of the different dimensions of the input, output and the processes and approaches of SA.",
"title": ""
},
{
"docid": "ee9cb495280dc6e252db80c23f2f8c2b",
"text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.",
"title": ""
},
{
"docid": "8f0805ba67919e349f2cd506378a5171",
"text": "Cycloastragenol (CAG) is an aglycone of astragaloside IV. It was first identified when screening Astragalus membranaceus extracts for active ingredients with antiaging properties. The present study demonstrates that CAG stimulates telomerase activity and cell proliferation in human neonatal keratinocytes. In particular, CAG promotes scratch wound closure of human neonatal keratinocyte monolayers in vitro. The distinct telomerase-activating property of CAG prompted evaluation of its potential application in the treatment of neurological disorders. Accordingly, CAG induced telomerase activity and cAMP response element binding (CREB) activation in PC12 cells and primary neurons. Blockade of CREB expression in neuronal cells by RNA interference reduced basal telomerase activity, and CAG was no longer efficacious in increasing telomerase activity. CAG treatment not only induced the expression of bcl2, a CREB-regulated gene, but also the expression of telomerase reverse transcriptase in primary cortical neurons. Interestingly, oral administration of CAG for 7 days attenuated depression-like behavior in experimental mice. In conclusion, CAG stimulates telomerase activity in human neonatal keratinocytes and rat neuronal cells, and induces CREB activation followed by tert and bcl2 expression. Furthermore, CAG may have a novel therapeutic role in depression.",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
}
] |
scidocsrr
|
385f6d4010c29fe15ae103b795f138d7
|
Predicting customer churn in banking industry using neural networks
|
[
{
"docid": "310e525bc7a78da2987d8c6d6a0ff46b",
"text": "This tutorial provides an overview of the data mining process. The tutorial also provides a basic understanding of how to plan, evaluate and successfully refine a data mining project, particularly in terms of model building and model evaluation. Methodological considerations are discussed and illustrated. After explaining the nature of data mining and its importance in business, the tutorial describes the underlying machine learning and statistical techniques involved. It describes the CRISP-DM standard now being used in industry as the standard for a technology-neutral data mining process model. The paper concludes with a major illustration of the data mining process methodology and the unsolved problems that offer opportunities for research. The approach is both practical and conceptually sound in order to be useful to both academics and practitioners.",
"title": ""
}
] |
[
{
"docid": "e64608f39ab082982178ad2b3539890f",
"text": "Hoeschele, Michael David. M.S., Purdue University, May, 2006, Detecting Social Engineering. Major Professor: Marcus K. Rogers. This study consisted of creating and evaluating a proof of concept model of the Social Engineering Defense Architecture (SEDA) as theoretically proposed by Hoeschele and Rogers (2005). The SEDA is a potential solution to the problem of Social Engineering (SE) attacks perpetrated over the phone lines. The proof of concept model implemented some simple attack detection processes and the database to store all gathered information. The model was tested by generating benign telephone conversations in addition to conversations that include Social Engineering (SE) attacks. The conversations were then processed by the model to determine its accuracy to detect attacks. The model was able to detect all attacks and to store all of the correct data in the database, resulting in 100% accuracy.",
"title": ""
},
{
"docid": "c16f21fd2b50f7227ea852882004ef5b",
"text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.",
"title": ""
},
{
"docid": "f3860c0ed0803759e44133a0110a60bb",
"text": "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.",
"title": ""
},
{
"docid": "49215cb8cb669aef5ea42dfb1e7d2e19",
"text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author",
"title": ""
},
{
"docid": "84d2e697b2f2107d34516909f22768c6",
"text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. Further work is needed to develop the evidence base for schema therapy for other disorders.",
"title": ""
},
{
"docid": "2cba0f9b3f4b227dfe0b40e3bebd99e4",
"text": "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "e5a6a42edcfd66dc16e6caa09cc67a10",
"text": "Eosinophilic esophagitis is an adaptive immune response to patient-specific antigens, mostly foods. Eosinophilic esophagitis is not solely IgE-mediated and is likely characterized by Th2 lymphocytes with an impaired esophageal barrier function. The key cytokines and chemokines are thymic stromal lymphopoeitin, interleukin-13, CCL26/eotaxin-3, and transforming growth factor-β, all involved in eosinophil recruitment and remodeling. Chronic food dysphagia and food impactions, the feared late complications, are related in part to dense subepithelial fibrosis, likely induced by interleukin-13 and transforming growth factor-β.",
"title": ""
},
{
"docid": "edcdae3f9da761cedd52273ccd850520",
"text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.",
"title": ""
},
{
"docid": "2272325860332d5d41c02f317ab2389e",
"text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.",
"title": ""
},
{
"docid": "514bf9c9105dd3de95c3965bb86ebe36",
"text": "Origami is the centuries-old art of folding paper, and recently, it is investigated as computer science: Given an origami with creases, the problem to determine if it can be flat after folding all creases is NP-hard. Another hundreds-old art of folding paper is a pop-up book. A model for the pop-up book design problem is given, and its computational complexity is investigated. We show that both of the opening book problem and the closing book problem are NP-hard.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "88c592bdd7bb9c9348545734a9508b7b",
"text": "environments: An introduction C.-S. Li B. L. Brech S. Crowder D. M. Dias H. Franke M. Hogstrom D. Lindquist G. Pacifici S. Pappe B. Rajaraman J. Rao R. P. Ratnaparkhi R. A. Smith M. D. Williams During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructureVcompute, storage and networkVis becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.",
"title": ""
},
{
"docid": "540099388527a2e8dd5b43162b697fea",
"text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.",
"title": ""
},
{
"docid": "2bfd884e92a26d017a7854be3dfb02e8",
"text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.",
"title": ""
},
{
"docid": "abeb22a9a8066091e5f508e61d17f101",
"text": "• I. What is Artificial Intelligence (AI)? • II. What are Expert Systems (ES)? ◦ Functional Components ◦ Structural Components • III. How do People Reason? • IV. How do Computers Reason? ◦ IV-1. Frames ◦ IV-2. Rule Based Reasoning ◾ IV-2a. Knowledge Engineering ◦ IV-3. Case-Based Reasoning ◦ IV-4. Neural Networks • V. Advantages and Disadvantages • VI. Additional Sources of Information ◦ VI-1. Additional Sources on World Wide Web ◾ Accounting Expert Systems Applications compiled by Carol E. Brown ◾ Artificial Intelligence in Business by Daniel E. O'Leary ◾ Artificial Intelligence / Expert Systems Section of the American Accounting Association ◾ International Journal of Intelligent Systems in Accounting, Finance and Management ◾ VI-2. Recent Books of Readings ◾ VI-3. References Used for Definitions • Photocopy Permission",
"title": ""
},
{
"docid": "4a31889cf90d39b7c49d02174a425b5b",
"text": "Inter-vehicle communication (IVC) protocols have the potential to increase the safety, efficiency, and convenience of transportation systems involving planes, trains, automobiles, and robots. The applications targeted include peer-to-peer networks for web surfing, coordinated braking, runway incursion prevention, adaptive traffic control, vehicle formations, and many others. The diversity of the applications and their potential communication protocols has challenged a systematic literature survey. We apply a classification technique to IVC applications to provide a taxonomy for detailed study of their communication requirements. The applications are divided into type classes which share common communication organization and performance requirements. IVC protocols are surveyed separately and their fundamental characteristics are revealed. The protocol characteristics are then used to determine the relevance of specific protocols to specific types of IVC applications.",
"title": ""
},
{
"docid": "922a4369bf08f23e1c0171dc35d5642b",
"text": "Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common and system performance to recognize expression degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require greater model sizes and may generalize poorly across views that are not included in the training set. We propose FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps's ability to synthesize faces enables insights into what is leaned by the model. FACSCaps models video frames using matrix capsules, where hierarchical pose relationships between face parts are built into internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset that includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.",
"title": ""
},
{
"docid": "e36659351fcd339533b73fd3dd77f261",
"text": "Past research provided abundant evidence that exposure to violent video games increases aggressive tendencies and decreases prosocial tendencies. In contrast, research on the effects of exposure to prosocial video games has been relatively sparse. The present research found support for the hypothesis that exposure to prosocial video games is positively related to prosocial affect and negatively related to antisocial affect. More specifically, two studies revealed that playing a prosocial (relative to a neutral) video game increased interpersonal empathy and decreased reported pleasure at another's misfortune (i.e., schadenfreude). These results lend further credence to the predictive validity of the General Learning Model (Buckley & Anderson, 2006) for the effects of media exposure on social tendencies.",
"title": ""
},
{
"docid": "e5bea734149b69a05455c5fec2d802e3",
"text": "This article introduces a collection of essays on continuity and discontinuity in cognitive development. In his lead essay, J. Kagan (2008) argues that limitations in past research (e.g., on number concepts, physical solidarity, and object permanence) render conclusions about continuity premature. Commentaries respectively (1) argue that longitudinal contexts are essential for interpreting developmental data, (2) illustrate the value of converging measures, (3) identify qualitative change via dynamical systems theory, (4) redirect the focus from states to process, and (5) review epistemological premises of alternative research traditions. Following an overview of the essays, this introductory article discusses how the search for developmental structures, continuity, and process differs between mechanistic-contextualist and organismic-contextualist metatheoretical frameworks, and closes by highlighting continuities in Kagan's scholarship over the past half century.",
"title": ""
},
{
"docid": "11d418decc0d06a3af74be77d4c71e5e",
"text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.",
"title": ""
}
] |
scidocsrr
|
9d2b360c9c72fc379b84c5966beb05c3
|
Fetal intracranial translucency and cisterna magna at 11 to 14 weeks : reference ranges and correlation with chromosomal abnormalities.
|
[
{
"docid": "557da3544fd738ecfc3edf812b92720b",
"text": "OBJECTIVES\nTo describe the sonographic appearance of the structures of the posterior cranial fossa in fetuses at 11 + 3 to 13 + 6 weeks of pregnancy and to determine whether abnormal findings of the brain and spine can be detected by sonography at this time.\n\n\nMETHODS\nThis was a prospective study including 692 fetuses whose mothers attended Innsbruck Medical University Hospital for first-trimester sonography. In 3% (n = 21) of cases, measurement was prevented by fetal position. Of the remaining 671 cases, in 604 there was either a normal anomaly scan at 20 weeks or delivery of a healthy child and in these cases the transcerebellar diameter (TCD) and the anteroposterior diameter of the cisterna magna (CM), measured at 11 + 3 to 13 + 6 weeks, were analyzed. In 502 fetuses, the anteroposterior diameter of the fourth ventricle (4V) was also measured. In 25 fetuses, intra- and interobserver repeatability was calculated.\n\n\nRESULTS\nWe observed a linear correlation between crown-rump length (CRL) and CM (CM = 0.0536 × CRL - 1.4701; R2 = 0.688), TCD (TCD = 0.1482 × CRL - 1.2083; R2 = 0.701) and 4V (4V = 0.0181 × CRL + 0.9186; R2 = 0.118). In three patients with posterior fossa cysts, measurements significantly exceeded the reference values. One fetus with spina bifida had an obliterated CM and the posterior border of the 4V could not be visualized.\n\n\nCONCLUSIONS\nTransabdominal sonographic assessment of the posterior fossa is feasible in the first trimester. Measurements of the 4V, the CM and the TCD performed at this time are reliable. The established reference values assist in detecting fetal anomalies. However, findings must be interpreted carefully, as some supposed malformations might be merely delayed development of brain structures.",
"title": ""
},
{
"docid": "7170110b2520fb37e282d08ed8774d0f",
"text": "OBJECTIVE\nTo examine the performance of the 11-13 weeks scan in detecting non-chromosomal abnormalities.\n\n\nMETHODS\nProspective first-trimester screening study for aneuploidies, including basic examination of the fetal anatomy, in 45 191 pregnancies. Findings were compared to those at 20-23 weeks and postnatal examination.\n\n\nRESULTS\nAneuploidies (n = 332) were excluded from the analysis. Fetal abnormalities were observed in 488 (1.1%) of the remaining 44 859 cases; 213 (43.6%) of these were detected at 11-13 weeks. The early scan detected all cases of acrania, alobar holoprosencephaly, exomphalos, gastroschisis, megacystis and body stalk anomaly, 77% of absent hand or foot, 50% of diaphragmatic hernia, 50% of lethal skeletal dysplasias, 60% of polydactyly, 34% of major cardiac defects, 5% of facial clefts and 14% of open spina bifida, but none of agenesis of the corpus callosum, cerebellar or vermian hypoplasia, echogenic lung lesions, bowel obstruction, most renal defects or talipes. Nuchal translucency (NT) was above the 95th percentile in 34% of fetuses with major cardiac defects.\n\n\nCONCLUSION\nAt 11-13 weeks some abnormalities are always detectable, some can never be and others are potentially detectable depending on their association with increased NT, the phenotypic expression of the abnormality with gestation and the objectives set for such a scan.",
"title": ""
}
] |
[
{
"docid": "e58036f93195603cb7dc7265b9adeb25",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "fc9061348b46fc1bf7039fa5efcbcea1",
"text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.",
"title": ""
},
{
"docid": "70622607a75305882251c073536aa282",
"text": "a r t i c l e i n f o",
"title": ""
},
{
"docid": "3a1019c31ff34f8a45c65703c1528fc4",
"text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.",
"title": ""
},
{
"docid": "12d625fe60790761ff604ab8aa70c790",
"text": "We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field \"face\" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable \"eye\" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a \"training set\" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate an median gaze estimation error of ap-proximately 0.8 degree.",
"title": ""
},
{
"docid": "a02f1ee7b77d00809d89c4a8fad462ed",
"text": "In a modern vehicle systems one of the main goals to achieve is driver's safety, and many sophisticated systems are made for that purpose. Vibration isolation for the vehicle seats, and at the same time for the driver, is one of the challenging problems. Parameters of the controller used for the isolation can be tuned for a different road types, making the isolation better (specially for the vehicles like dampers, tractors, field machinery, bulldozers, etc.). In this paper we propose the method where neural networks are used for road type recognition. The main goal is to obtain a good road recognition for the purpose of better vibration damping of a driver's semi active controllable seat. The recognition of a specific road type will be based on the measurable parameters of a vehicle. Discrete Fourier Transform of measurable parameters is obtained and used for the neural network learning. The dimension of the input vector, as the main parameter that decides the speed of road recognition, is varied.",
"title": ""
},
{
"docid": "462248d6ebad4ed197b0322a5ab09406",
"text": "The purpose of this study was to quantify the response of the forearm musculature to combinations of wrist and forearm posture and grip force. Ten healthy individuals performed five relative handgrip efforts (5%, 50%, 70% and 100% of maximum, and 50 N) for combinations of three wrist postures (flexed, neutral and extended) and three forearm postures (pronated, neutral and supinated). 'Baseline' extensor muscle activity (associated with holding the dynamometer without exerting grip force) was greatest with the forearm pronated and the wrist extended, while flexor activity was largest in supination when the wrist was flexed. Extensor activity was generally larger than that of flexors during low to mid-range target force levels, and was always greater when the forearm was pronated. Flexor activation only exceeded the extensor activation at the 70% and 100% target force levels in some postures. A flexed wrist reduced maximum grip force by 40-50%, but EMG amplitude remained elevated. Women produced 60-65% of the grip strength of men, and required 5-10% more of both relative force and extensor activation to produce a 50 N grip. However, this appeared to be due to strength rather than gender. Forearm rotation affected grip force generation only when the wrist was flexed, with force decreasing from supination to pronation (p < 0.005). The levels of extensor activation observed, especially during baseline and low level grip exertions, suggest a possible contributing mechanism to the development of lateral forearm muscle pain in the workplace.",
"title": ""
},
{
"docid": "194156892cbdb0161e9aae6a01f78703",
"text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.",
"title": ""
},
{
"docid": "2cbb2af6ed4ef193aad77c2f696a45c5",
"text": "Consider mutli-goal tasks that involve static environments and dynamic goals. Examples of such tasks, such as goaldirected navigation and pick-and-place in robotics, abound. Two types of Reinforcement Learning (RL) algorithms are used for such tasks: model-free or model-based. Each of these approaches has limitations. Model-free RL struggles to transfer learned information when the goal location changes, but achieves high asymptotic accuracy in single goal tasks. Model-based RL can transfer learned information to new goal locations by retaining the explicitly learned state-dynamics, but is limited by the fact that small errors in modelling these dynamics accumulate over long-term planning. In this work, we improve upon the limitations of model-free RL in multigoal domains. We do this by adapting the Floyd-Warshall algorithm for RL and call the adaptation Floyd-Warshall RL (FWRL). The proposed algorithm learns a goal-conditioned action-value function by constraining the value of the optimal path between any two states to be greater than or equal to the value of paths via intermediary states. Experimentally, we show that FWRL is more sample-efficient and learns higher reward strategies in multi-goal tasks as compared to Q-learning, model-based RL and other relevant baselines in a tabular domain.",
"title": ""
},
{
"docid": "49ca8739b6e28f0988b643fc97e7c6b1",
"text": "Stroke is a leading cause of severe physical disability, causing a range of impairments. Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.",
"title": ""
},
{
"docid": "8892e3f007967f8274b0513e4c451aed",
"text": "Research on narcissism and envy suggests a variable relationship that may reflect differences between how vulnerable and grandiose narcissism relate to precursors of envy. Accordingly, we proposed a model in which dispositional envy and relative deprivation differentially mediate envy's association with narcissistic vulnerability, grandiosity, and entitlement. To test the model, 330 young adults completed dispositional measures of narcissism, entitlement, and envy; one week later, participants reported on deprivation and envy feelings toward a peer who outperformed others on an intelligence test for a cash prize (Study 1) or earned higher monetary payouts in a betting game (Study 2). In both studies, structural equation modeling broadly supported the proposed model. Vulnerable narcissism robustly predicted episodic envy via dispositional envy. Entitlement-a narcissistic facet common to grandiosity and vulnerability-was a significant indirect predictor via relative deprivation. Study 2 also found that (a) the grandiose leadership/authority facet indirectly curbed envy feelings via dispositional envy, and (b) episodic envy contributed to schadenfreude feelings, which promoted efforts to sabotage a successful rival. Whereas vulnerable narcissists appear dispositionally envy-prone, grandiose narcissists may be dispositionally protected. Both, however, are susceptible to envy through entitlement when relative deprivation is encountered.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "06bfa716dd067d05229c92dc66757772",
"text": "Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study. Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.",
"title": ""
},
{
"docid": "36cc985d2d86c4047533550293e8c7f4",
"text": "The pyISC is a Python API and extension to the C++ based Incremental Stream Clustering (ISC) anomaly detection and classification framework. The framework is based on parametric Bayesian statistical inference using the Bayesian Principal Anomaly (BPA), which enables to combine the output from several probability distributions. pyISC is designed to be easy to use and integrated with other Python libraries, specifically those used for data science. In this paper, we show how to use the framework and we also compare its performance to other well-known methods on 22 real-world datasets. The simulation results show that the performance of pyISC is comparable to the other methods. pyISC is part of the Stream toolbox developed within the STREAM project.",
"title": ""
},
{
"docid": "6e2d4a24764265cf86c097d5b750113c",
"text": "BACKGROUND\nMusic has been used for medicinal purposes throughout history due to its variety of physiological, psychological and social effects.\n\n\nOBJECTIVE\nTo identify the effects of prenatal music stimulation on the vital signs of pregnant women at full term, on the modification of fetal cardiac status during a fetal monitoring cardiotocograph, and on anthropometric measurements of newborns taken after birth.\n\n\nMATERIAL AND METHOD\nA randomized controlled trial was implemented. The four hundred and nine pregnant women coming for routine prenatal care were randomized in the third trimester to receive either music (n = 204) or no music (n = 205) during a fetal monitoring cardiotocograph. All of the pregnant women were evaluated by measuring fetal cardiac status (basal fetal heart rate and fetal reactivity), vital signs before and after a fetal monitoring cardiotocograph (maternal heart rate and systolic and diastolic blood pressure), and anthropometric measurements of the newborns were taken after birth (weight, height, head circumference and chest circumference).\n\n\nRESULTS\nThe strip charts showed a significantly increased basal fetal heart rate and higher fetal reactivity, with accelerations of fetal heart rate in pregnant women with music stimulation. After the fetal monitoring cardiotocograph, a statistically significant decrease in systolic blood pressure, diastolic blood pressure and heart rate in women receiving music stimulation was observed.\n\n\nCONCLUSION\nMusic can be used as a tool which improves the vital signs of pregnant women during the third trimester, and can influence the fetus by increasing fetal heart rate and fetal reactivity.",
"title": ""
},
{
"docid": "aaba4377acbd22cbc52681d4d15bf9af",
"text": "This paper presents a new human body communication (HBC) technique that employs magnetic resonance for data transfer in wireless body-area networks (BANs). Unlike electric field HBC (eHBC) links, which do not necessarily travel well through many biological tissues, the proposed magnetic HBC (mHBC) link easily travels through tissue, offering significantly reduced path loss and, as a result, reduced transceiver power consumption. In this paper the proposed mHBC concept is validated via finite element method simulations and measurements. It is demonstrated that path loss across the body under various postures varies from 10-20 dB, which is significantly lower than alternative BAN techniques.",
"title": ""
},
{
"docid": "f5b72167077481ca04e339ad4dc4da3c",
"text": "We have implemented a MATLAB source code for VES forward modeling and its inversion using a genetic algorithm (GA) optimization technique. The codes presented here are applied to the Schlumberger electrode arrangement. In the forward modeling computation, we have developed code to generate theoretical apparent resistivity curves from a specified layered earth model. The input to this program consists of the number of layers, the layer resistivity and thickness. The output of this program is apparent resistivity versus electrode spacing incorporated in the inversion process as apparent resistivity data. For the inversion, we have developed a MATLAB code to invert (for layer resistivity and thickness) the apparent resistivity data by the genetic algorithm optimization technique. The code also has some function files involving the basic stages in the GA inversion. Our inversion procedure addressed calculates forward solutions from sets of random input, to find the apparent resistivity. Then, it evolves the models by better sets of inputs through processes that imitate natural mating, selection, crossover, and mutation in each generation. The aim of GA inversion is to find the best correlation between model and theoretical apparent resistivity curves. In this study, we present three synthetic examples that demonstrate the effectiveness and usefulness of this program. Our numerical modeling shows that the GA optimization technique can be applied for resolving layer parameters with reasonably low error values.",
"title": ""
},
{
"docid": "c0954a0e283c27f1dba130ad8f907b64",
"text": "Optical techniques for measurement-interferometry, spectrometry and polarimetry\"have long been used in materials measurement and environmental evaluation. The optical fiber lends get more flexibility in the implementation of these basic concepts. Fiber-optic technology has, for over 30 years, made important contributions to the science of measurement. The paper presents a perspective on these contributions which while far from exhaustive highlights the important conceptual advances made in the early days of optical fiber technology and the breadth of application which has emerged. There are also apparent opportunities for yet more imaginative research in applying guided-wave optics to emerging and challenging measurement requirements ranging from microsystems characterization to cellular biochemistry to art restoration.",
"title": ""
},
{
"docid": "0deda73c3cb7e87bcf3e1df0716e13d2",
"text": "The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists’ judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images.",
"title": ""
},
{
"docid": "786f1bbc10cfb952c7709b635ec01fcf",
"text": "Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.",
"title": ""
}
] |
scidocsrr
|
002423c52965056329ebe4f7d4f13715
|
Sudarshan Kriya Yogic breathing in the treatment of stress, anxiety, and depression. Part II--clinical applications and guidelines.
|
[
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "6f0ffda347abfd11dc78c0b76ceb11f8",
"text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.",
"title": ""
}
] |
[
{
"docid": "1d6733d6b017248ef935a833ecfe6f0d",
"text": "Users increasingly rely on crowdsourced information, such as reviews on Yelp and Amazon, and liked posts and ads on Facebook. This has led to a market for blackhat promotion techniques via fake (e.g., Sybil) and compromised accounts, and collusion networks. Existing approaches to detect such behavior relies mostly on supervised (or semi-supervised) learning over known (or hypothesized) attacks. They are unable to detect attacks missed by the operator while labeling, or when the attacker changes strategy. We propose using unsupervised anomaly detection techniques over user behavior to distinguish potentially bad behavior from normal behavior. We present a technique based on Principal Component Analysis (PCA) that models the behavior of normal users accurately and identifies significant deviations from it as anomalous. We experimentally validate that normal user behavior (e.g., categories of Facebook pages liked by a user, rate of like activity, etc.) is contained within a low-dimensional subspace amenable to the PCA technique. We demonstrate the practicality and effectiveness of our approach using extensive ground-truth data from Facebook: we successfully detect diverse attacker strategies—fake, compromised, and colluding Facebook identities—with no a priori labeling while maintaining low false-positive rates. Finally, we apply our approach to detect click-spam in Facebook ads and find that a surprisingly large fraction of clicks are from anomalous users.",
"title": ""
},
{
"docid": "9f9c51b8e657fd9625b6cf22b1f003ab",
"text": "Most popular deep models for action recognition split video sequences into short sub-sequences consisting of a few frames, frame-based features are then pooled for recognizing the activity. Usually, this pooling step discards the temporal order of the frames, which could otherwise be used for better recognition. Towards this end, we propose a novel pooling method, generalized rank pooling (GRP), that takes as input, features from the intermediate layers of a CNN that is trained on tiny sub-sequences, and produces as output the parameters of a subspace which (i) provides a low-rank approximation to the features and (ii) preserves their temporal order. We propose to use these parameters as a compact representation for the video sequence, which is then used in a classification setup. We formulate an objective for computing this subspace as a Riemannian optimization problem on the Grassmann manifold, and propose an efficient conjugate gradient scheme for solving it. Experiments on several activity recognition datasets show that our scheme leads to state-of-the-art performance.",
"title": ""
},
{
"docid": "e364db9141c85b1f260eb3a9c1d42c5b",
"text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557",
"title": ""
},
{
"docid": "a4e733379c2720e731d448ec80599c53",
"text": "As digitalization sustainably alters industries and societies, small and medium-sized enterprises (SME) must initiate a digital transformation to remain competitive and to address the increasing complexity of customer needs. Although many enterprises encounter challenges in practice, research does not yet provide practicable recommendations to increase the feasibility of digitalization. Furthermore, SME frequently fail to fully realize the implications of digitalization for their organizational structures, strategies, and operations, and have difficulties to identify a suitable starting point for corresponding initiatives. In order to address these challenges, this paper uses the concept of Business Process Management (BPM) to define a set of capabilities for a management framework, which builds upon the paradigm of process orientation to cope with the various requirements of digital transformation. Our findings suggest that enterprises can use a functioning BPM as a starting point for digitalization, while establishing necessary digital capabilities subsequently.",
"title": ""
},
{
"docid": "d69e8f1e75d74345a93f4899b2a0f073",
"text": "CONTEXT\nThis paper provides an overview of the contribution of medical education research which has employed focus group methodology to evaluate both undergraduate education and continuing professional development.\n\n\nPRACTICALITIES AND PROBLEMS\nIt also examines current debates about the ethics and practicalities involved in conducting focus group research. It gives guidance as to how to go about designing and planning focus group studies, highlighting common misconceptions and pitfalls, emphasising that most problems stem from researchers ignoring the central assumptions which underpin the qualitative research endeavour.\n\n\nPRESENTING AND DEVELOPING FOCUS GROUP RESEARCH\nParticular attention is paid to analysis and presentation of focus group work and the uses to which such information is put. Finally, it speculates about the future of focus group research in general and research in medical education in particular.",
"title": ""
},
{
"docid": "df94e8f3c2cef683db432e3e767fe913",
"text": "The design and manufacture of present-day CPUs causes inherent variation in supercomputer architectures such as variation in power and temperature of the chips. The variation also manifests itself as frequency differences among processors under Turbo Boost dynamic overclocking. This variation can lead to unpredictable and suboptimal performance in tightly coupled HPC applications. In this study, we use compute-intensive kernels and applications to analyze the variation among processors in four top supercomputers: Edison, Cab, Stampede, and Blue Waters. We observe that there is an execution time difference of up to 16% among processors on the Turbo Boost-enabled supercomputers: Edison, Cab, Stampede. There is less than 1% variation on Blue Waters, which does not have a dynamic overclocking feature. We analyze measurements from temperature and power instrumentation and find that intrinsic differences in the chips' power efficiency is the culprit behind the frequency variation. Moreover, we analyze potential solutions such as disabling Turbo Boost, leaving idle cores and replacing slow chips to mitigate the variation. We also propose a speed-aware dynamic task redistribution (load balancing) algorithm to reduce the negative effects of performance variation. Our speed-aware load balancing algorithm improves the performance up to 18% compared to no load balancing performance and 6% better than the non-speed aware counterpart.",
"title": ""
},
{
"docid": "b25416a09c04697f0cbc7eb907bca4f0",
"text": "This paper investigates the reaction of financial markets to the announcement of a business combination between software firms. Based on the theory of economic networks, this article argues that mergers of software firms should lead to greater wealth creation because of the network effect theoretically linked to the combination of software products. This hypothesis is partially supported, as only the targets in software/software outperform those in the other categories, yielding abnormal returns of great magnitude. In addition, we could not conclude that controlling position in the target enabled bidders to make the appropriate technological decisions to ensure the emergence of network effects in the portfolio of the new entity and create additional wealth for the shareholders of both the bidder and the target. Future research is needed to better understand the effect of the different properties of the software pooled inside the product portfolio of the new entity.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "3dfdc8abe03dd77730fe485f07588f43",
"text": "Background\nThe most common neurodegenerative disease is dementia. Family of dementia patients says that their lives have been changed extensively after happening of dementia to their patients. One of the problems of family and caregivers is depression of the caregiver. In this study, we aimed to find the prevalence of depression and factors can affect depression in the dementia caregivers.\n\n\nMaterials and Methods\nThis study was cross-sectional study with convenient sampling method. Our society was 96 main caregivers of dementia patients in the year 2015 in Iran. We had two questionnaires, a demographic and Beck Depression Inventory (BDI). BDI Cronbach's alpha is 0.86 for psychiatric patients and 0.81 for nonpsychiatric persons, and Beck's scores are between 0 and 64. We used SPSS version 22 for statistical analysis.\n\n\nResults\nAccording to Beck depression test, 69.8% (n = 67 out of 96) of all caregivers had scores in the range of depression. In bivariate analysis, we found higher dementia severity and lower support of other family members from the caregiver can predict higher depression in the caregiver. As well, in regression analysis using GLM model, we found higher age and lower educational level of the caregiver can predict higher depression in the caregiver. Moreover, regression analysis approved findings about severity and support of other family members in bivariate analysis.\n\n\nConclusion\nHigh-level depression is found in caregivers of dementia patients. It needs special attention from healthcare managers, clinicians and all of health-care personnel who deals with dementia patients and their caregivers.",
"title": ""
},
{
"docid": "2edd599684751b95ddde1bf3847dfadb",
"text": "Partially shaded (PS) photovoltaic (PV) arrays have multiple peaks at their P–V characteristic. Although conventional maximum power point tracking (MPPT) algorithms are successful when PV arrays are under uniform irradiance conditions (UICs), their tracking speeds are low and may fail to track global maximum power point (GMPP) for PS arrays. Several MPPT algorithms have been proposed for PS arrays. Most of them require numerous samplings which decreases MPPT speed and increases energy loss. The proposed method in this paper gets the GMPP deterministically and very fast. It intelligently takes some samples from the array's P–V curve and divides the search voltage range into small subregions. Then, it approximates the I–V curve of each subregion with a simple curve, and accordingly estimates an upper limit for the array power in that subregion. Next, by comparing the measured real power values with the estimated upper limits, the search region of GMPP is limited, and based on some defined criteria, the vicinity of GMPP is determined. Simulation and experimental results and comparisons are presented to highlight the performance and superiority of the proposed approach.",
"title": ""
},
{
"docid": "b2962d473a4b2d1a20996ae578ceccd4",
"text": "In this paper, we examine the logic and methodology of engineering design from the perspective of the philosophy of science. The fundamental characteristics of design problems and design processes are discussed and analyzed. These characteristics establish the framework within which different design paradigms are examined. Following the discussions on descriptive properties of design, and the prescriptive role of design paradigms, we advocate the plausible hypothesis that there is a direct resemblance between the structure of design processes and the problem solving of scientific communities. The scientific community metaphor has been useful in guiding the development of general purpose highly effective design process meta-tools [73], [125].",
"title": ""
},
{
"docid": "8553229613282672e12a175bfaca554d",
"text": "The K Nearest Neighbor (kNN) method has widely been used in the applications of data mining and machine learning due to its simple implementation and distinguished performance. However, setting all test data with the same k value in the previous kNN methods has been proven to make these methods impractical in real applications. This article proposes to learn a correlation matrix to reconstruct test data points by training data to assign different k values to different test data points, referred to as the Correlation Matrix kNN (CM-kNN for short) classification. Specifically, the least-squares loss function is employed to minimize the reconstruction error to reconstruct each test data point by all training data points. Then, a graph Laplacian regularizer is advocated to preserve the local structure of the data in the reconstruction process. Moreover, an ℓ1-norm regularizer and an ℓ2, 1-norm regularizer are applied to learn different k values for different test data and to result in low sparsity to remove the redundant/noisy feature from the reconstruction process, respectively. Besides for classification tasks, the kNN methods (including our proposed CM-kNN method) are further utilized to regression and missing data imputation. We conducted sets of experiments for illustrating the efficiency, and experimental results showed that the proposed method was more accurate and efficient than existing kNN methods in data-mining applications, such as classification, regression, and missing data imputation.",
"title": ""
},
{
"docid": "f5182ad077b1fdaa450d16544d63f01b",
"text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.",
"title": ""
},
{
"docid": "be317160d07d0430787f99cf006172c4",
"text": "Chromium (VI) is a widely used industrial chemical, extensively used in paints, metal finishes, steel including stainless steel manufacturing, alloy cast irons, chrome, and wood treatment. On the contrary, chromium (III) salts such as chromium polynicotinate, chromium chloride and chromium picolinate, are used as micronutrients and nutritional supplements, and have been demonstrated to exhibit a significant number of health benefits in rodents and humans. However, the cause for the hexavalent chromium to induce cytotoxicity is not entirely understood. A series of in vitro and in vivo studies have demonstrated that chromium (VI) induces an oxidative stress through enhanced production of reactive oxygen species (ROS) leading to genomic DNA damage and oxidative deterioration of lipids and proteins. A cascade of cellular events occur following chromium (VI)‐induced oxidative stress including enhanced production of superoxide anion and hydroxyl radicals, increased lipid peroxidation and genomic DNA fragmentation, modulation of intracellular oxidized states, activation of protein kinase C, apoptotic cell death and altered gene expression. In this paper, we have demonstrated concentration‐ and time‐dependent effects of sodium dichromate (chromium (VI) or Cr (VI)) on enhanced production of superoxide anion and hydroxyl radicals, changes in intracellular oxidized states as determined by laser scanning confocal microscopy, DNA fragmentation and apoptotic cell death (by flow cytometry) in human peripheral blood mononuclear cells. These results were compared with the concentration-dependent effects of chromium (VI) on chronic myelogenous leukemic K562 cells and J774A.1 murine macrophage cells. Chromium (VI)‐induced enhanced production of ROS, as well as oxidative tissue and DNA damage were observed in these cells. More pronounced effect was observed on chronic myelogenous leukemic K562 cells and J774A.1 murine macrophage cells. Furthermore, we have assessed the effect of a single oral LD50 dose of chromium (VI) on female C57BL/6Ntac and p53‐deficient C57BL/6TSG p53 mice on enhanced production of superoxide anion, lipid peroxidation and DNA fragmentation in the hepatic and brain tissues. Chromium (VI)‐induced more pronounced oxidative damage in p53 deficient mice. This in vivo study highlighted that apoptotic regulatory protein p53 may play a major role in chromium (VI)‐induced oxidative stress and toxicity. Taken together, oxidative stress and oxidative tissue damage, and a cascade of cellular events including modulation of apoptotic regulatory gene p53 are involved in chromium (VI)‐induced toxicity and carcinogenesis.",
"title": ""
},
{
"docid": "beb365aacc5f66eea05d8aaebf97f275",
"text": "In this paper, we study the effects of three different kinds of search engine rankings on consumer behavior and search engine revenues: direct ranking effect, interaction effect between ranking and product ratings, and personalized ranking effect. We combine a hierarchical Bayesian model estimated on approximately one million online sessions from Travelocity, together with randomized experiments using a real-world hotel search engine application. Our archival data analysis and randomized experiments are consistent in demonstrating the following: (1) a consumer utility-based ranking mechanism can lead to a significant increase in overall search engine revenue. (2) Significant interplay occurs between search engine ranking and product ratings. An inferior position on the search engine affects “higher-class” hotels more adversely. On the other hand, hotels with a lower customer rating are more likely to benefit from being placed on the top of the screen. These findings illustrate that product search engines could benefit from directly incorporating signals from social media into their ranking algorithms. (3) Our randomized experiments also reveal that an “active” (wherein users can interact with and customize the ranking algorithm) personalized ranking system leads to higher clicks but lower purchase propensities and lower search engine revenue compared to a “passive” (wherein users cannot interact with the ranking algorithm) personalized ranking system. This result suggests that providing more information during the decision-making process may lead to fewer consumer purchases because of information overload. Therefore, product search engines should not adopt personalized ranking systems by default. Overall, our study unravels the economic impact of ranking and its interaction with social media on product search engines.",
"title": ""
},
{
"docid": "52755d4ace354c031368167a9da91547",
"text": "One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset, considering that there is no available labeled training data. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even with significant difference in distribution and properties, and transfer the knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about change in marginal and conditional distributions where transfer learning emphasizes on problems with same marginal distribution and different conditional distribution, and domain adaptation deals with opposite conditions. Most prior works have exploited these two learning strategies separately for domain shift problem where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift problem in which the distribution difference is significantly large, particularly vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distributions across domains in an unsupervised manner where no label is available in test set. Moreover, VDA constructs condensed domain invariant clusters in the embedding representation to separate various classes alongside the domain transfer. In this work, we employ pseudo target labels refinement to iteratively converge to final solution. Employing an iterative procedure along with a novel optimization problem creates a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets with different difficulties verify that VDA can significantly outperform state-of-the-art methods in image classification problem.",
"title": ""
},
{
"docid": "12b075837d52d5c73a155466c28f2996",
"text": "Banks in Nigeria need to understand the perceptual difference in both male and female employees to better develop adequate policy on sexual harassment. This study investigated the perceptual differences on sexual harassment among male and female bank employees in two commercial cities (Kano and Lagos) of Nigeria.Two hundred and seventy five employees (149 males, 126 females) were conveniently sampled for this study. A survey design with a questionnaire adapted from Sexual Experience Questionnaire (SEQ) comprises of three dimension scalesof sexual harassment was used. The hypotheses were tested with independent samples t-test. The resultsindicated no perceptual differences in labelling sexual harassment clues between male and female bank employees in Nigeria. Thus, the study recommends that bank managers should support and establish the tone for sexual harassment-free workplace. KeywordsGender Harassment, Sexual Coercion, Unwanted Sexual Attention, Workplace.",
"title": ""
},
{
"docid": "0f3d520a6d09c136816a9e0493c45db1",
"text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.",
"title": ""
},
{
"docid": "83305a3f13a943b1226cf92375c30ab4",
"text": "The recent availability of Intel Haswell processors marks the transition of hardware transactional memory from research toys to mainstream reality. DBX is an in-memory database that uses Intel's restricted transactional memory (RTM) to achieve high performance and good scalability across multi-core machines. The main limitation (and also key to practicality) of RTM is its constrained working set size: an RTM region that reads or writes too much data will always be aborted. The design of DBX addresses this challenge in several ways. First, DBX builds a database transaction layer on top of an underlying shared-memory store. The two layers use separate RTM regions to synchronize shared memory access. Second, DBX uses optimistic concurrency control to separate transaction execution from its commit. Only the commit stage uses RTM for synchronization. As a result, the working set of the RTMs used scales with the meta-data of reads and writes in a database transaction as opposed to the amount of data read/written. Our evaluation using TPC-C workload mix shows that DBX achieves 506,817 transactions per second on a 4-core machine.",
"title": ""
}
] |
scidocsrr
|
9cadeeec720d0c8287566cc07ffd6fd6
|
Keyphrase Extraction Based on Prior Knowledge
|
[
{
"docid": "956d052c1599e90d31358735d9ea73aa",
"text": "We present a keyphrase extraction algorithm for scientific p ublications. Different from previous work, we introduce features that capture the positions of phrases in document with respect to logical section s f und in scientific discourse. We also introduce features that capture salient morphological phenomena found in scientific keyphrases, such as whether a candida te keyphrase is an acronyms or uses specific terminologically productive suffi xes. We have implemented these features on top of a baseline feature set used by Kea [1]. In our evaluation using a corpus of 120 scientific publications mul tiply annotated for keyphrases, our system significantly outperformed Kea at th e p < .05 level. As we know of no other existing multiply annotated keyphrase do cument collections, we have also made our evaluation corpus publicly avai lable. We hope that this contribution will spur future comparative research.",
"title": ""
}
] |
[
{
"docid": "7c1146ddc6e0904e0b30266b164e91f7",
"text": "The number of digital images that needs to be acquired, analyzed, classified, stored and retrieved in the medical centers is exponentially growing with the advances in medical imaging technology. Accordingly, medical image classification and retrieval has become a popular topic in the recent years. Despite many projects focusing on this problem, proposed solutions are still far from being sufficiently accurate for real-life implementations. Interpreting medical image classification and retrieval as a multi-class classification task, in this work, we investigate the performance of five different feature types in a SVM-based learning framework for classification of human body X-Ray images into classes corresponding to body parts. Our comprehensive experiments show that four conventional feature types provide performances comparable to the literature with low per-class accuracies, whereas local binary patterns produce not only very good global accuracy but also good class-specific accuracies with respect to the features used in the literature.",
"title": ""
},
{
"docid": "ec9fa7d2b0833d1b2f9fb9c7e0d3f350",
"text": "Our goal in this paper is to explore two generic approaches to disrupting dark networks: kinetic and nonkinetic. The kinetic approach involves aggressive and offensive measures to eliminate or capture network members and their supporters, while the non-kinetic approach involves the use of subtle, non-coercive means for combating dark networks. Two strategies derive from the kinetic approach: Targeting and Capacity-building. Four strategies derive from the non-kinetic approach: Institution-Building, Psychological Operations, Information Operations and Rehabilitation. We use network data from Noordin Top’s South East Asian terror network to illustrate how both kinetic and non-kinetic strategies could be pursued depending on a commander’s intent. Using this strategic framework as a backdrop, we strongly advise the use of SNA metrics in developing alterative counter-terrorism strategies that are contextdependent rather than letting SNA metrics define and drive a particular strategy.",
"title": ""
},
{
"docid": "50b5f29431b758e0df5bd6e295ef78d1",
"text": "While deep convolutional neural networks (CNNs) have emerged as the driving force of a wide range of domains, their computationally and memory intensive natures hinder the further deployment in mobile and embedded applications. Recently, CNNs with low-precision parameters have attracted much research attention. Among them, multiplier-free binary- and ternary-weight CNNs are reported to be of comparable recognition accuracy with full-precision networks, and have been employed to improve the hardware efficiency. However, even with the weights constrained to binary and ternary values, large-scale CNNs still require billions of operations in a single forward propagation pass.\n In this paper, we introduce a novel approach to maximally eliminate redundancy in binary- and ternary-weight CNN inference, improving both the performance and energy efficiency. The initial kernels are transformed into much fewer and sparser ones, and the output feature maps are rebuilt from the immediate results. Overall, the number of total operations in convolution is reduced. To find an efficient transformation solution for each already trained network, we propose a searching algorithm, which iteratively matches and eliminates the overlap in a set of kernels. We design a specific hardware architecture to optimize the implementation of kernel transformation. Specialized dataflow and scheduling method are proposed. Tested on SVHN, AlexNet, and VGG-16, our architecture removes 43.4%--79.9% operations, and speeds up the inference by 1.48--3.01 times.",
"title": ""
},
{
"docid": "b214270aacf9c9672af06e58ff26aa5a",
"text": "Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision. We combine ideas from timeseries modeling and metric learning, and study siamese recurrent networks (SRNs) that minimize a classification loss to learn a good similarity measure between time series. Specifically, our approach learns a vectorial representation for each time series in such a way that similar time series are modeled by similar representations, and dissimilar time series by dissimilar representations. Because it is a similarity prediction models, SRNs are particularly well-suited to challenging scenarios such as signature recognition, in which each person is a separate class and very few examples per class are available. We demonstrate the potential merits of SRNs in withindomain and out-of-domain classification experiments and in one-shot learning experiments on tasks such as signature, voice, and sign language recognition.",
"title": ""
},
{
"docid": "1fc2c4294d4c768e5ee80fb0de1eb402",
"text": "A promising approach for dealing with the increasing demand of data traffic is the use of device-to-device (D2D) technologies, in particular when the destination can be reached directly, or though few retransmissions by peer devices. Thus, the cellular network can offload local traffic that is transmitted by an ad hoc network, e.g., a mobile ad hoc network (MANET), or a vehicular ad hoc network (VANET). The cellular base station can help coordinate all the devices in the ad hoc network by reusing the software tools developed for software-defined networks (SDNs), which divide the control and the data messages, transmitted in two separate interfaces. In this paper, we present a practical implementation of an SDN MANET, describe in detail the software components that we adopted, and provide a repository for all the new components that we developed. This work can be a starting point for the wireless networking community to design new testbeds with SDN capabilities that can have the advantages of D2D data transmissions and the flexibility of a centralized network management. In order to prove the feasibility of such a network, we also showcase the performance of the proposed network implemented in real devices, as compared to a distributed ad hoc network.",
"title": ""
},
{
"docid": "001b3155f0d67fd153173648cd483ac2",
"text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.",
"title": ""
},
{
"docid": "2cbd47c2e7a1f68bd84d18413db26ea3",
"text": "Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.",
"title": ""
},
{
"docid": "3a86f1f91cfaa398a03a56abb34f497c",
"text": "We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as nonoverlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, for example, line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation to approximate generalized blue noise properties. To generate these samples with the desired properties, we first construct a set of nonoverlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach that combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum..",
"title": ""
},
{
"docid": "6f18b8e0a1e7c835dc6f94bfa8d96437",
"text": "Recent years have witnessed the rise of the gut microbiota as a major topic of research interest in biology. Studies are revealing how variations and changes in the composition of the gut microbiota influence normal physiology and contribute to diseases ranging from inflammation to obesity. Accumulating data now indicate that the gut microbiota also communicates with the CNS — possibly through neural, endocrine and immune pathways — and thereby influences brain function and behaviour. Studies in germ-free animals and in animals exposed to pathogenic bacterial infections, probiotic bacteria or antibiotic drugs suggest a role for the gut microbiota in the regulation of anxiety, mood, cognition and pain. Thus, the emerging concept of a microbiota–gut–brain axis suggests that modulation of the gut microbiota may be a tractable strategy for developing novel therapeutics for complex CNS disorders.",
"title": ""
},
{
"docid": "60306e39a7b281d35e8a492aed726d82",
"text": "The aim of this study was to assess the efficiency of four anesthetic agents, tricaine methanesulfonate (MS-222), clove oil, 7 ketamine, and tobacco extract on juvenile rainbow trout. Also, changes of blood indices were evaluated at optimum doses of four anesthetic agents. Basal effective concentrations determined were 40 mg L−1 (induction, 111 ± 16 s and recovery time, 246 ± 36 s) for clove oil, 150 mg L−1 (induction, 287 ± 59 and recovery time, 358 ± 75 s) for MS-222, 1 mg L−1 (induction, 178 ± 38 and recovery time, 264 ± 57 s) for ketamine, and 30 mg L−1 (induction, 134 ± 22 and recovery time, 285 ± 42 s) for tobacco. According to our results, significant changes in hematological parameters including white blood cells (WBCs), red blood cells (RBCs), hematocrit (Ht), and hemoglobin (Hb) were found between four anesthetics agents. Also, significant differences were observed in some plasma parameters including cortical, glucose, and lactate between experimental treatments. Induction and recovery times for juvenile Oncorhynchus mykiss anesthetized with anesthetic agents were dose-dependent.",
"title": ""
},
{
"docid": "ed8bcc72caefe30126ece6eb7a549243",
"text": "This paper describes a new concept for locomotion of mobile robots based on single actuated tensegrity structures. To discuss the working principle, two vibration-driven locomotion systems are considered. Due to the complex dynamics of the applied tensegrity structures with pronounced mechanical compliance, the movement performance of both systems is highly dependent on the driving frequency. By using single-actuation, the system design and also their control can be kept simple. The movement of the robots is depending on their configuration uniaxial bidirectional or planar. The working principle of both systems is discussed with the help of transient dynamic analyses and verified with experimental tests for a selected prototype.",
"title": ""
},
{
"docid": "601b06f0cdf578400b11a54f36e14d56",
"text": "Advances in deep learning algorithms overshadow their security risk in software implementations. This paper discloses a set of vulnerabilities in popular deep learning frameworks including Caffe, TensorFlow, and Torch. Contrary to the small code size of deep learning models, these deep learning frameworks are complex, and they heavily depend on numerous open source packages. This paper considers the risks caused by these vulnerabilities by studying their impact on common deep learning applications such as voice recognition and image classification. By exploiting these framework implementations, attackers can launch denial-of-service attacks that crash or hang a deep learning application, or control-flow hijacking attacks that lead to either system compromise or recognition evasions. The goal of this paper is to draw attention to software implementations and call for community collaborative effort to improve security of deep learning frameworks.",
"title": ""
},
{
"docid": "d5302f6d0633313a30fa9cb0b90dcd0e",
"text": "Differing classes of abused drugs utilize different mechanisms of molecular pharmacological action yet the overuse of these same drugs frequently leads to the same outcome: addiction. Similarly, episodes of stress can lead to drug-seeking behaviors and relapse in recovering addicts. To overcome the labor-intensive headache of having to design a specific addiction-breaking intervention tailored to each drug it would be expedient to attack the cycle of addiction at targets common to such seemingly disparate classes of drugs of abuse. Recently, encouraging observations were made whereby stressful conditions and differing classes of drugs of abuse were found to impinge upon the same excitatory synapses on dopamine neurons in the midbrain. These findings will increase our understanding of the intricacies of addiction and LTP, and may lead to new interventions for breaking addiction.",
"title": ""
},
{
"docid": "57666e9d9b7e69c38d7530633d556589",
"text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.",
"title": ""
},
{
"docid": "c6967ff67346894766f810f44a6bb6bc",
"text": "Knowledge about the effects of physical exercise on brain is accumulating although the mechanisms through which exercise exerts these actions remain largely unknown. A possible involvement of adult hippocampal neurogenesis (AHN) in the effects of exercise is debated while the physiological and pathological significance of AHN is under intense scrutiny. Recently, both neurogenesis-dependent and independent mechanisms have been shown to mediate the effects of physical exercise on spatial learning and anxiety-like behaviors. Taking advantage that the stimulating effects of exercise on AHN depend among others, on serum insulin-like growth factor I (IGF-I), we now examined whether the behavioral effects of running exercise are related to variations in hippocampal neurogenesis, by either increasing or decreasing it according to serum IGF-I levels. Mutant mice with low levels of serum IGF-I (LID mice) had reduced AHN together with impaired spatial learning. These deficits were not improved by running. However, administration of exogenous IGF-I ameliorated the cognitive deficit and restored AHN in LID mice. We also examined the effect of exercise in LID mice in the novelty-suppressed feeding test, a measure of anxiety-like behavior in laboratory animals. Normal mice, but not LID mice, showed reduced anxiety after exercise in this test. However, after exercise, LID mice did show improvement in the forced swim test, a measure of behavioral despair. Thus, many, but not all of the beneficial effects of exercise on brain function depend on circulating levels of IGF-I and are associated to increased hippocampal neurogenesis, including improved cognition and reduced anxiety.",
"title": ""
},
{
"docid": "67392cae4df0da44c8fda4b3f9eceb29",
"text": "We propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing anO(n) computation with an O(n) one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method.",
"title": ""
},
{
"docid": "60ed46346d2992789e4ecd34e1936cc7",
"text": "The aim of this study was to differentiate the effects of body load and joint movements on the leg muscle activation pattern during assisted locomotion in spinal man. Stepping movements were induced by a driven gait orthosis (DGO) on a treadmill in patients with complete para-/tetraplegia and, for comparison, in healthy subjects. All subjects were unloaded by 70% of their body weight. EMG of upper and lower leg muscles and joint movements of the DGO of both legs were recorded. In the patients, normal stepping movements and those mainly restricted to the hips (blocked knees) were associated with a pattern of leg muscle EMG activity that corresponded to that of the healthy subjects, but the amplitude was smaller. Locomotor movements restricted to imposed ankle joint movements were followed by no, or only focal EMG responses in the stretched muscles. Unilateral locomotion in the patients was associated with a normal pattern of leg muscle EMG activity restricted to the moving side, while in the healthy subjects a bilateral activation occurred. This indicates that interlimb coordination depends on a supraspinal input. During locomotion with 100% body unloading in healthy subjects and patients, no EMG activity was present. Thus, it can be concluded that afferent input from hip joints, in combination with that from load receptors, plays a crucial role in the generation of locomotor activity in the isolated human spinal cord. This is in line with observations from infant stepping experiments and experiments in cats. Afferent feedback from knee and ankle joints may be involved largely in the control of focal movements.",
"title": ""
},
{
"docid": "3f207c3c622d1854a7ad6c5365354db1",
"text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.",
"title": ""
},
{
"docid": "c89a7027de2362aa1bfe64b084073067",
"text": "This paper considers pick-and-place tasks using aerial vehicles equipped with manipulators. The main focus is on the development and experimental validation of a nonlinear model-predictive control methodology to exploit the multi-body system dynamics and achieve optimized performance. At the core of the approach lies a sequential Newton method for unconstrained optimal control and a high-frequency low-level controller tracking the generated optimal reference trajectories. A low cost quadrotor prototype with a simple manipulator extending more than twice the radius of the vehicle is designed and integrated with an on-board vision system for object tracking. Experimental results show the effectiveness of model-predictive control to motivate the future use of real-time optimal control in place of standard ad-hoc gain scheduling techniques.",
"title": ""
},
{
"docid": "c11b77f1392c79f4a03f9633c8f97f4d",
"text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.",
"title": ""
}
] |
scidocsrr
|
b6e8dbd872062bdab44281f822532c16
|
A parallel workload model and its implications for processor allocation
|
[
{
"docid": "da1d1e9ddb5215041b9565044b9feecb",
"text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.",
"title": ""
}
] |
[
{
"docid": "323f7fd7269d020ebc60af1917e90cb4",
"text": "This paper describes the design concept, operating principle, analytical design, fabrication of a functional prototype, and experimental performance verification of a novel wobble motor with a XY compliant mechanism driven by shape memory alloy (SMA) wires. With the aim of realizing an SMA based motor which could generate bidirectional high-torque motion, the proposed motor is devised with wobble motor driving principle widely utilized for speed reducers. As a key mechanism which functions to guide wobbling motion, a planar XY compliant mechanism is designed and applied to the motor. Since the mechanism has monolithic flat structure with the planar mirror symmetric configuration, cyclic expansion and contraction of the SMA wires could be reliably converted into high-torque rotary motion. For systematic design of the motor, a characterization of electro-thermomechanical behavior of the SMA wire is experimentally carried out, and the design parametric analysis is conducted to determine parametric values of the XY compliant mechanism. The designed motor is fabricated as a functional prototype to experimentally investigate its operational feasibility and working performances. The observed experimental results obviously demonstrate the unique driving characteristics and practical applicability of the proposed motor.",
"title": ""
},
{
"docid": "121f1baeaba51ebfdfc69dde5cd06ce3",
"text": "Mobile operators are facing an exponential traffic growth due to the proliferation of portable devices that require a high-capacity connectivity. This, in turn, leads to a tremendous increase of the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations, in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from todays to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can amount to up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combines fiber and microwave performs relatively well in scenarios where the wireless network is characterized by a high small-base-stations penetration rate.",
"title": ""
},
{
"docid": "7fe44f62935744b5ae6ee78ae15150dd",
"text": "The flexibility and general programmability offered by the Software Defined Networking (SDN) technology has supposed a disruption in the evolution of the network. It offers enormous benefits to network control and opens new ways of communication by defining powerful but simple switching elements (forwarders) that can use any single field of a packet or message to determine the outgoing port to which it will be forwarded. Such benefits can be applied to the Internet of Things (IoT) and thus resolve some of the main challenges it exposes, such as the ability to let devices connected to heterogeneous networks to communicate each other. In the present document we describe a general model to integrate SDN and IoT so that heterogeneous communications are achieved. However, it exposes other (simpler) challenges must be resolved, evaluated, and validated against current and future solutions before the design of the integrated approach can be finished.",
"title": ""
},
{
"docid": "8468e279ff6dfcd11a5525ab8a60d816",
"text": "We provide a concise introduction to basic approaches to reinforcement learning from the machine learning perspective. The focus is on value function and policy gradient methods. Some selected recent trends are highlighted.",
"title": ""
},
{
"docid": "4507ae69ed021941ff7b0e39d8d50d22",
"text": "In the last few years a new research area, called stream reasoning, emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work on mainly static data, the Web is, on the other hand, extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both from the perspective of models and theories development, and from the perspective of systems and tools design and implementation. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning.",
"title": ""
},
{
"docid": "cf702356b3a8895f5a636cc05597b52a",
"text": "This paper investigates non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> control problems for a class of uncertain nonlinear networked control systems (NCSs) with randomly occurring information, such as the controller gain fluctuation and the uncertain nonlinearity, and short time-varying delay via output feedback controller. Using the nominal point technique, the NCS is converted into a novel time-varying discrete time model with norm-bounded uncertain parameters for reducing the conservativeness. Based on linear matrix inequality framework and output feedback control strategy, design methods for general and optimal non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> controllers are presented. Meanwhile, these control laws can still be applied to linear NCSs and general fragile control NCSs while introducing random variables. Finally, three examples verify the correctness of the presented scheme.",
"title": ""
},
{
"docid": "faa951d9c72c36c2df205c44c3f60c28",
"text": "Face perception is mediated by a distributed neural system in humans that consists of multiple, bilateral regions. The functional organization of this system embodies a distinction between the representation of invariant aspects of faces, which is the basis for recognizing individuals, and the representation of changeable aspects, such as eye gaze, expression, and lip movement, which underlies the perception of information that facilitates social communication. The system also has a hierarchical organization. A core system, consisting of occipitotemporal regions in extrastriate visual cortex, mediates the visual analysis of faces. An extended system consists of regions from neural systems for other cognitive functions that can act in concert with the core system to extract meaning from faces. Of regions in the extended system for face perception, the amygdala plays a central role in processing the social relevance of information gleaned from faces, particularly when that information may signal a potential threat.",
"title": ""
},
{
"docid": "39b2903849932dd7c4ef1dc669ec04e1",
"text": "Emerging technologies such as the Internet of Things (IoT) require latency-aware computation for real-time application processing. In IoT environments, connected things generate a huge amount of data, which are generally referred to as big data. Data generated from IoT devices are generally processed in a cloud infrastructure because of the on-demand services and scalability features of the cloud computing paradigm. However, processing IoT application requests on the cloud exclusively is not an efficient solution for some IoT applications, especially time-sensitive ones. To address this issue, Fog computing, which resides in between cloud and IoT devices, was proposed. In general, in the Fog computing environment, IoT devices are connected to Fog devices. These Fog devices are located in close proximity to users and are responsible for intermediate computation and storage. One of the key challenges in running IoT applications in a Fog computing environment are resource allocation and task scheduling. Fog computing research is still in its infancy, and taxonomy-based investigation into the requirements of Fog infrastructure, platform, and applications mapped to current research is still required. This survey will help the industry and research community synthesize and identify the requirements for Fog computing. This paper starts with an overview of Fog computing in which the definition of Fog computing, research trends, and the technical differences between Fog and cloud are reviewed. Then, we investigate numerous proposed Fog computing architectures and describe the components of these architectures in detail. From this, the role of each component will be defined, which will help in the deployment of Fog computing. Next, a taxonomy of Fog computing is proposed by considering the requirements of the Fog computing paradigm. We also discuss existing research works and gaps in resource allocation and scheduling, fault tolerance, simulation tools, and Fog-based microservices. Finally, by addressing the limitations of current research works, we present some open issues, which will determine the future research direction for the Fog computing paradigm.",
"title": ""
},
{
"docid": "e0919f53691d17c7cb495c19914683f8",
"text": "Carpooling has long held the promise of reducing gas consumption by decreasing mileage to deliver coriders. Although ad hoc carpools already exist in the real world through private arrangements, little research on the topic has been done. In this article, we present the first systematic work to design, implement, and evaluate a carpool service, called coRide, in a large-scale taxicab network intended to reduce total mileage for less gas consumption. Our coRide system consists of three components, a dispatching cloud server, passenger clients, and an onboard customized device, called TaxiBox. In the coRide design, in response to the delivery requests of passengers, dispatching cloud servers calculate cost-efficient carpool routes for taxicab drivers and thus lower fares for the individual passengers.\n To improve coRide’s efficiency in mileage reduction, we formulate an NP-hard route calculation problem under different practical constraints. We then provide (1) an optimal algorithm using Linear Programming, (2) a 2-approximation algorithm with a polynomial complexity, and (3) its corresponding online version with a linear complexity. To encourage coRide’s adoption, we present a win-win fare model as the incentive mechanism for passengers and drivers to participate. We test the performance of coRide by a comprehensive evaluation with a real-world trial implementation and a data-driven simulation with 14,000 taxi data from the Chinese city Shenzhen. The results show that compared with the ground truth, our service can reduce 33% of total mileage; with our win-win fare model, we can lower passenger fares by 49% and simultaneously increase driver profit by 76%.",
"title": ""
},
{
"docid": "908f862dea52cd9341d2127928baa7de",
"text": "Arsenic's history in science, medicine and technology has been overshadowed by its notoriety as a poison in homicides. Arsenic is viewed as being synonymous with toxicity. Dangerous arsenic concentrations in natural waters is now a worldwide problem and often referred to as a 20th-21st century calamity. High arsenic concentrations have been reported recently from the USA, China, Chile, Bangladesh, Taiwan, Mexico, Argentina, Poland, Canada, Hungary, Japan and India. Among 21 countries in different parts of the world affected by groundwater arsenic contamination, the largest population at risk is in Bangladesh followed by West Bengal in India. Existing overviews of arsenic removal include technologies that have traditionally been used (oxidation, precipitation/coagulation/membrane separation) with far less attention paid to adsorption. No previous review is available where readers can get an overview of the sorption capacities of both available and developed sorbents used for arsenic remediation together with the traditional remediation methods. We have incorporated most of the valuable available literature on arsenic remediation by adsorption ( approximately 600 references). Existing purification methods for drinking water; wastewater; industrial effluents, and technological solutions for arsenic have been listed. Arsenic sorption by commercially available carbons and other low-cost adsorbents are surveyed and critically reviewed and their sorption efficiencies are compared. Arsenic adsorption behavior in presence of other impurities has been discussed. Some commercially available adsorbents are also surveyed. An extensive table summarizes the sorption capacities of various adsorbents. Some low-cost adsorbents are superior including treated slags, carbons developed from agricultural waste (char carbons and coconut husk carbons), biosorbents (immobilized biomass, orange juice residue), goethite and some commercial adsorbents, which include resins, gels, silica, treated silica tested for arsenic removal come out to be superior. Immobilized biomass adsorbents offered outstanding performances. Desorption of arsenic followed by regeneration of sorbents has been discussed. Strong acids and bases seem to be the best desorbing agents to produce arsenic concentrates. Arsenic concentrate treatment and disposal obtained is briefly addressed. This issue is very important but much less discussed.",
"title": ""
},
{
"docid": "b43bcd460924f0b5a7366f23bf0d8fe7",
"text": "Historically, it has been difficult to define paraphilias in a consistent manner or distinguish paraphilias from non-paraphilic or normophilic sexual interests (see Blanchard, 2009a; Moser & Kleinplatz, 2005). As part of the American Psychiatric Association’s (APA) process of revising the Diagnostic and Statistical Manual of Mental Disorders (DSM), Blanchard (2010a), the chair of the DSM-5 Paraphilias subworkgroup (PSWG), has proposed a new paraphilia definition: ‘‘A paraphilia is any powerful and persistent sexual interest other than sexual interest in copulatory or precopulatory behavior with phenotypicallynormal, consentingadulthumanpartners’’ (p. 367). Blanchard (2009a) acknowledges that his paraphilia ‘‘definition is not watertight’’and it already has attracted serious criticism (see Haeberle, 2010; Hinderliter, 2010; Singy, 2010). The current analysis will critique three components of Blanchard’s proposed definition (sexual interest in copulatory or precopulatory behavior, phenotypically normal, and consenting adult human partners) to determine if the definition is internally consistent andreliably distinguishes individualswith a paraphilia from individuals with normophilia. Blanchard (2009a) believes his definition ‘‘is better than no real definition,’’but that remains to be seen. According to Blanchard (2009a), the current DSM paraphilia definition (APA, 2000) is a definition by concatenation (a list of things that are paraphilias), but he believes a definition by exclusion (everything that is not normophilic) is preferable. The change is not substantive as normophilia (formerly a definitionofexclusion)nowbecomesadefinitionofconcatenation (a list of acceptable activities). Nevertheless, it seems odd to define a paraphilia on the basis of what it is not, rather than by the commonalities among the different paraphilias. Most definitions are statements of what things are, not what things are excluded or lists of things to be included. Blanchard (2009a) purposefully left ‘‘intact the distinction betweennormativeandnon-normativesexualbehavior,’’implying that these categories are meaningful. Blanchard (2010b; see alsoBlanchardetal.,2009)definesaparaphiliabyrelativeascertainment (the interest in paraphilic stimuli is greater than the interest in normophilic stimuli) rather than absolute ascertainment (the interest is intense). Using relative ascertainment confirms that one cannot be both paraphilic and normophilic; the greater interest would classify the individual as paraphilic or normophilic. Blanchard (2010a) then contradicts himself when he asserts that once ascertained with a paraphilia, the individual should retain that label, even if the powerful and persistent paraphilic sexual interest dissipates. Logically, the relative dissipation of the paraphilic and augmentation of the normophilic interests should re-categorize the individual as normophilic. The first aspect of Blanchard’s paraphilia definition is the ‘‘sexual interest incopulatoryorprecopulatorybehavior.’’Obviously, most normophilic individuals do not desire or respond sexually to all adults. Ascertaining if someone is more aroused by the coitus or their partner’s physique, attitude, attributes, etc. seems fruitless and hopelessly convoluted. I can see no other way to interpret sexual interest in copulatory or precopulatory behavior, except to conclude that coitus (between phenotypically normal consenting adults) is normophilic. 
Otherwise, a powerful and persistent preference for blonde (or Asian or petite) coital partners is a paraphilia. Another version of this definition exists (Blanchard, 2009a, 2009b), but I do not believe the changes substantially alter any of my comments.",
"title": ""
},
{
"docid": "7457c09c1068ba1397f468879bc3b0d1",
"text": "Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.",
"title": ""
},
{
"docid": "36b7b37429a8df82e611df06303a8fcb",
"text": "Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.",
"title": ""
},
{
"docid": "56dabbcf36d734211acc0b4a53f23255",
"text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2ac20d934cb911b6751e93d9bc750fcf",
"text": "In recent years, visual saliency estimation in images has attracted much attention in the computer vision community. However, predicting saliency in videos has received relatively little attention. Inspired by the recent success of deep convolutional neural networks based static saliency models, in this work, we study two different two-stream convolutional networks for dynamic saliency prediction. To improve the generalization capability of our models, we also introduce a novel, empirically grounded data augmentation technique for this task. We test our models on DIEM dataset and report superior results against the existing models. Moreover, we perform transfer learning experiments on SALICON, a recently proposed static saliency dataset, by finetuning our models on the optical flows estimated from static images. Our experiments show that taking motion into account in this way can be helpful for static saliency estimation.",
"title": ""
},
{
"docid": "b4714cacd13600659e8a94c2b8271697",
"text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.",
"title": ""
},
{
"docid": "6162ad3612b885add014bd09baa5f07a",
"text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.",
"title": ""
},
{
"docid": "0b135f95bfcccf34c75959a41a0a7fe6",
"text": "Analogy is a kind of similarity in which the same system of relations holds across different objects. Analogies thus capture parallels across different situations. When such a common structure is found, then what is known about one situation can be used to infer new information about the other. This chapter describes the processes involved in analogical reasoning, reviews foundational research and recent developments in the field, and proposes new avenues of investigation.",
"title": ""
},
{
"docid": "cf29cfcec35d7005641b38cae8cd4b74",
"text": "University can be a difficult, stressful time for students. This stress causes problems ranging from academic difficulties and poor performance, to serious mental and physical health issues. Studies have shown that physical activity can help reduce stress, improve academic performance and contribute to a healthier campus atmosphere physically, mentally, and emotionally. Computer science is often considered among the most difficult and stressful programs offered at academic institutions. Yet the current stereotype of computer scientists includes unhealthy lifestyle choices and de-emphasizes physical activity. \n This paper analyzes the effects of introducing short periods of physical activity into an introductory CS course, during the normal lecture break. Contrary to the stereotype of CS students, participation was high, and the students enjoyed these Fit-Breaks more than alternative break activities. This small injection of physical activity also had a measurable impact on the students' overall satisfaction with life, and may have had positive impacts on stress, retention, and academic performance as well as improved student perception, especially in areas that are traditionally problematic for female computer science students. \n Fit-Breaks are low-cost, easy to replicate, and enjoyable exercises. Instead of sitting quietly for ten minutes staring at a phone; stretching, moving, and getting a short burst of physical activity has a positive benefit for students. And the good news is: they actually enjoy it.",
"title": ""
},
{
"docid": "80477fdab96ae761dbbb7662b87e82a0",
"text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.",
"title": ""
}
] |
scidocsrr
|
ee2412b831c8c519d3e6a0993f259ac0
|
A new 5-transistor XOR-XNOR circuit based on the pass transistor logic
|
[
{
"docid": "971398019db2fb255769727964f1e38a",
"text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in presence of noise. Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive- NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.",
"title": ""
}
] |
[
{
"docid": "a32ea25ea3adc455dd3dfd1515c97ae3",
"text": "Item-to-item collaborative filtering (aka.item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1] , our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "cf219b9093dc55f09d067954d8049aeb",
"text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.",
"title": ""
},
{
"docid": "a9ed70274d7908193625717a80c3f2ea",
"text": "Soft robotics is a growing area of research which utilizes the compliance and adaptability of soft structures to develop highly adaptive robotics for soft interactions. One area in which soft robotics has the ability to make significant impact is in the development of soft grippers and manipulators. With an increased requirement for automation, robotics systems are required to perform task in unstructured and not well defined environments; conditions which conventional rigid robotics are not best suited. This requires a paradigm shift in the methods and materials used to develop robots such that they can adapt to and work safely in human environments. One solution to this is soft robotics, which enables soft interactions with the surroundings while maintaining the ability to apply significant force. This review paper assesses the current materials and methods, actuation methods and sensors which are used in the development of soft manipulators. The achievements and shortcomings of recent technology in these key areas are evaluated, and this paper concludes with a discussion on the potential impacts of soft manipulators on industry and society.",
"title": ""
},
{
"docid": "6c857ae5ce9db878c7ecd4263604874e",
"text": "In the investigations of chaos in dynamical systems a major role is played by symbolic dynamics, i.e. the description of the system by a shift on a symbol space via conjugation. We examine whether any kind of noise can strengthen the stochastic behaviour of chaotic systems dramatically and what the consequences for the symbolic description are. This leads to the introduction of random subshifts of nite type which are appropriate for the description of quite general dynamical systems evolving under the innuence of noise and showing internal stochastic features. We investigate some of the ergodic and stochastic properties of these shifts and show situations when they behave dynamically like the common shifts. In particular we want to present examples where such random shift systems appear as symbolic descriptions.",
"title": ""
},
{
"docid": "4ecac491b8029cf9de0ebe0d03bebec8",
"text": "In this work, we aim at developing an unsupervised abstractive summarization system in the multi-document setting. We design a paraphrastic sentence fusion model which jointly performs sentence fusion and paraphrasing using skip-gram word embedding model at the sentence level. Our model improves the information coverage and at the same time abstractiveness of the generated sentences. We conduct our experiments on the human-generated multi-sentence compression datasets and evaluate our system on several newly proposed Machine Translation (MT) evaluation metrics. Furthermore, we apply our sentence level model to implement an abstractive multi-document summarization system where documents usually contain a related set of sentences. We also propose an optimal solution for the classical summary length limit problem which was not addressed in the past research. For the document level summary, we conduct experiments on the datasets of two different domains (e.g., news article and user reviews) which are well suited for multi-document abstractive summarization. Our experiments demonstrate that the methods bring significant improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "e7f771269ee99c04c69d1a7625a4196f",
"text": "This report is a summary of Device-associated (DA) Module data collected by hospitals participating in the National Healthcare Safety Network (NHSN) for events occurring from January through December 2010 and re ported to the Centers for Disease Control and Prevention (CDC) by July 7, 2011. This report updates previously published DA Module data from the NHSN and provides contemporary comparative rates. This report comple ments other NHSN reports, including national and state-specific reports of standardized infection ratios for select health care-associated infections (HAIs). The NHSN was established in 2005 to integrate and supersede 3 legacy surveillance systems at the CDC: the National Nosocomial Infections Surveillance system, the Dialysis Surveillance Network, and the National Sur veillance System for Healthcare Workers. NHSN data col lection, reporting, and analysis are organized into 3 components—Patient Safety, Healthcare Personnel",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "cd7c2eee84942324c77b6acd2b3e3e86",
"text": "Learning word embeddings has received a significant amount of attention recently. Often, word embeddings are learned in an unsupervised manner from a large collection of text. The genre of the text typically plays an important role in the effectiveness of the resulting embeddings. How to effectively train word embedding models using data from different domains remains a problem that is underexplored. In this paper, we present a simple yet effective method for learning word embeddings based on text from different domains. We demonstrate the effectiveness of our approach through extensive experiments on various down-stream NLP tasks.",
"title": ""
},
{
"docid": "aa5c22fa803a65f469236d2dbc5777a3",
"text": "This article presents data on CVD and risk factors in Asian women. Data were obtained from available cohort studies and statistics for mortality from the World Health Organization. CVD is becoming an important public health problem among Asian women. There are high rates of CHD mortality in Indian and Central Asian women; rates are low in southeast and east Asia. Chinese and Indian women have very high rates and mortality from stroke; stroke is also high in central Asian and Japanese women. Hypertension and type 2 DM are as prevalent as in western women, but rates of obesity and smoking are less common. Lifestyle interventions aimed at prevention are needed in all areas.",
"title": ""
},
{
"docid": "ca1c193e5e5af821772a5d123e84b72a",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "17247d2991fac47bcd675f547a5c8185",
"text": "In this paper, we describe an approach for efficiently streaming large and highly detailed 3D city models, which is based on open standards and open source developments. This approach meets both the rendering performance requirements in WebGL enabled web browsers and the requirements by 3D Geographic Information Systems regarding data structuring, geo-referencing and accessibility of feature properties. 3D city models are assumed to be available as CityGML data sets due to its widespread adoption by public authorities. The Cesium.js open source virtual globe is used as a platform for embedding custom 3D assets. glTF and related formats are used for efficiently encoding 3D data and for enabling streaming of large 3D models. In order to fully exploit the capabilities of web browsers and standard internet protocols, a series of filtering and data processing steps must be performed, which are described in this paper.",
"title": ""
},
{
"docid": "5eeb17964742e1bf1e517afcb1963b02",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "461a4911e3dedf13db369d2b85861f77",
"text": "This paper proposes a novel approach using a coarse-to-fine analysis strategy for sentence-level emotion classification which takes into consideration of similarities to sentences in training set as well as adjacent sentences in the context. First, we use intra-sentence based features to determine the emotion label set of a target sentence coarsely through the statistical information gained from the label sets of the k most similar sentences in the training data. Then, we use the emotion transfer probabilities between neighboring sentences to refine the emotion labels of the target sentences. Such iterative refinements terminate when the emotion classification converges. The proposed algorithm is evaluated on Ren-CECps, a Chinese blog emotion corpus. Experimental results show that the coarse-to-fine emotion classification algorithm improves the sentence-level emotion classification by 19.11% on the average precision metric, which outperforms the baseline methods.",
"title": ""
},
{
"docid": "61953281f4b568ad15e1f62be9d68070",
"text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.",
"title": ""
},
{
"docid": "90dc36628f9262157ea8722d82830852",
"text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.",
"title": ""
},
{
"docid": "64d14f0be0499ddb4183fe9c48653205",
"text": "Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals.",
"title": ""
},
{
"docid": "0f9a4d22cc7f63ea185f3f17759e185a",
"text": "Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.",
"title": ""
},
{
"docid": "13f7df2198bfe474e92e0072a3de2f9b",
"text": "Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating the way that human observers freeview a natural scene has both scientific and economic impact. It has therefore attracted the attention from researchers in a wide range of science and engineering disciplines. With the ever increasing computational power, machine learning has become a popular tool to mine human data in the exploration of how people direct their gaze when inspecting a visual scene. This paper reviews recent advances in learning saliency-based visual attention and discusses several key issues in this topic. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
27a5fb33ff8a2ae0a8e59311b8188740
|
Interactive software maps for web-based source code analysis
|
[
{
"docid": "124c73eb861c0b2fb64d0084b3961859",
"text": "Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based in cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.",
"title": ""
}
] |
[
{
"docid": "adccd039cc54352eefd855567e8eeb62",
"text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.",
"title": ""
},
{
"docid": "fec18dd0fba50779f8e8cc8d83c947e5",
"text": "Trust plays important roles in diverse decentralized environments, including our society at large. Computational trust models help to, for instance, guide users' judgements in online auction sites about other users; or determine quality of contributions in web 2.0 sites. Most of the existing trust models, however, require historical information about past behavior of a specific agent being evaluated - information that is not always available. In contrast, in real life interactions among users, in order to make the first guess about the trustworthiness of a stranger, we commonly use our \"instinct\" - essentially stereotypes developed from our past interactions with \"similar\" people. We propose StereoTrust, a computational trust model inspired by real life stereotypes. A user forms stereotypes using her previous transactions with other agents. A stereotype contains certain features of agents and an expected outcome of the transaction. These features can be taken from agents' profile information, or agents' observed behavior in the system. When facing a stranger, the stereotypes matching stranger's profile are aggregated to derive his expected trust. Additionally, when some information about stranger's previous transactions is available, StereoTrust uses it to refine the stereotype matching. According to our experiments, StereoTrust compares favorably with existing trust models that use different kind of information and more complete historical information. Moreover, because evaluation is done according to user's personal stereotypes, the system is completely distributed and the result obtained is personalized. StereoTrust can be used as a complimentary mechanism to provide the initial trust value for a stranger, especially when there is no trusted, common third parties.",
"title": ""
},
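To make the stereotype idea in the abstract above concrete, here is a minimal Python sketch of forming stereotypes from one's own transaction history and aggregating the ones that match a stranger's profile. The feature representation, the matching rule, and the simple success-rate aggregation are illustrative assumptions, not the paper's exact model.

```python
from collections import defaultdict

class StereoTrust:
    """Minimal sketch: build stereotypes from own past transactions,
    then estimate a stranger's expected trust from matching stereotypes."""

    def __init__(self):
        # profile feature -> [successful transactions, total transactions]
        self.stereotypes = defaultdict(lambda: [0, 0])

    def record_transaction(self, partner_features, success):
        """Update stereotypes with the outcome of one past transaction."""
        for feature in partner_features:
            stats = self.stereotypes[feature]
            stats[0] += int(success)
            stats[1] += 1

    def expected_trust(self, stranger_features, prior=0.5):
        """Aggregate the stereotypes that match the stranger's profile."""
        successes = trials = 0
        for feature in stranger_features:
            if feature in self.stereotypes:
                s, t = self.stereotypes[feature]
                successes += s
                trials += t
        return successes / trials if trials else prior

# Toy usage: trust of a stranger sharing two known profile features
model = StereoTrust()
model.record_transaction({"country:DE", "seller"}, success=True)
model.record_transaction({"country:DE", "buyer"}, success=False)
print(model.expected_trust({"country:DE", "seller"}))
```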
{
"docid": "59b26acc158c728cf485eae27de665f7",
"text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.",
"title": ""
},
{
"docid": "caad87e49a39569d3af1fe646bd0bde2",
"text": "Over the last years, a variety of pervasive games was developed. Although some of these applications were quite successful in bringing digital games back to the real world, very little is known about their successful integration into smart environments. When developing video games, developers can make use of a broad variety of heuristics. Using these heuristics to guide the development process of applications for intelligent environments could significantly increase their functional quality. This paper addresses the question, whether existing heuristics can be used by pervasive game developers, or if specific design guidelines for smart home environments are required. In order to give an answer, the transferability of video game heuristics was evaluated in a two-step process. In a first step, a set of validated heuristics was analyzed to identify platform-dependent elements. In a second step, the transferability of those elements was assessed in a focus group study.",
"title": ""
},
{
"docid": "8ddb7c62f032fb07116e7847e69b51d1",
"text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools",
"title": ""
},
{
"docid": "e4db0ee5c4e2a5c87c6d93f2f7536f15",
"text": "Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of sparsely-regularized objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a nontraditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and is applicable to common L1-regularized regression and classification objectives, such as Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance as compared to other stateof-the-art methods on several real-world distributed datasets.",
"title": ""
},
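The objectives named in the abstract above (Lasso, sparse logistic regression, elastic net) all share the L1 proximal step as a building block. The sketch below shows that step inside a plain, single-machine ISTA loop for the Lasso; it is a generic illustration, not the paper's communication-efficient primal-dual method, and the toy data are made up.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(X, y, lam, step, iters=500):
    """Plain ISTA for 0.5 * ||Xw - y||^2 + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)          # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage with a random sparse regression problem
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(50)
w_hat = lasso_ista(X, y, lam=0.1, step=1.0 / np.linalg.norm(X, 2) ** 2)
print(np.round(w_hat[:5], 2))
```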
{
"docid": "e066761ecb7d8b7468756fb4be6b8fcb",
"text": "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.",
"title": ""
},
{
"docid": "7928ad4d18e3f3eaaf95fa0b49efafa0",
"text": "Associative classifiers have been proposed to achieve an accurate model with each individual rule being interpretable. However, existing associative classifiers often consist of a large number of rules and, thus, can be difficult to interpret. We show that associative classifiers consisting of an ordered rule set can be represented as a tree model. From this view, it is clear that these classifiers are restricted in that at least one child node of a non-leaf node is never split. We propose a new tree model, i.e., condition-based tree (CBT), to relax the restriction. Furthermore, we also propose an algorithm to transform a CBT to an ordered rule set with concise rule conditions. This ordered rule set is referred to as a condition-based classifier (CBC). Thus, the interpretability of an associative classifier is maintained, but more expressive models are possible. The rule transformation algorithm can be also applied to regular binary decision trees to extract an ordered set of rules with simple This research was partially supported by ONR grant N00014-09-1-0656. Email addresses: hdeng3@asu.com (Houtao Deng), george.runger@asu.edu (George Runger), eugene.tuv@intel.com (Eugene Tuv), wade.bannister@ingenixconsulting.com (Wade Bannister) Preprint submitted to Elsevier November 17, 2013 rule conditions. Feature selection is applied to a binary representation of conditions to simplify/improve the models further. Experimental studies show that CBC has competitive accuracy performance, and has a significantly smaller number of rules (median of 10 rules per data set) than well-known associative classifiers such as CBA (median of 47) and GARC (median of 21). CBC with feature selection has even a smaller number of rules.",
"title": ""
},
{
"docid": "38f85a10e8f8b815974f5e42386b1fa3",
"text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.",
"title": ""
},
{
"docid": "0d3e55a7029d084f6ba889b7d354411c",
"text": "Electrophysiological and computational studies suggest that nigro-striatal dopamine may play an important role in learning about sequences of environmentally important stimuli, particularly when this learning is based upon step-by-step associations between stimuli, such as in second-order conditioning. If so, one would predict that disruption of the midbrain dopamine system--such as occurs in Parkinson's disease--may lead to deficits on tasks that rely upon such learning processes. This hypothesis was tested using a \"chaining\" task, in which each additional link in a sequence of stimuli leading to reward is trained step-by-step, until a full sequence is learned. We further examined how medication (L-dopa) affects this type of learning. As predicted, we found that Parkinson's patients tested 'off' L-dopa performed as well as controls during the first phase of this task, when required to learn a simple stimulus-response association, but were impaired at learning the full sequence of stimuli. In contrast, we found that Parkinson's patients tested 'on' L-dopa performed better than those tested 'off', and no worse than controls, on all phases of the task. These findings suggest that the loss of dopamine that occurs in Parkinson's disease can lead to specific learning impairments that are predicted by electrophysiological and computational studies, and that enhancing dopamine levels with L-dopa alleviates this deficit. This last result raises questions regarding the mechanisms by which midbrain dopamine modulates learning in Parkinson's disease, and how L-dopa affects these processes.",
"title": ""
},
{
"docid": "18dbbf0338d138f71a57b562883f0677",
"text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5e896b2d47853088dc51323507f2f23a",
"text": "A number of Learning Management Systems (LMSs) exist on the market today. A subset of a LMS is the component in which student assessment is managed. In some forms of assessment, such as open questions, the LMS is incapable of evaluating the students’ responses and therefore human intervention is necessary. In order to assess at higher levels of Bloom’s (1956) taxonomy, it is necessary to include open-style questions in which the student is given the task as well as the freedom to arrive at a response without the comfort of recall words and/or phrases. Automating the assessment process of open questions is an area of research that has been ongoing since the 1960s. Earlier work focused on statistical or probabilistic approaches based primarily on conceptual understanding. Recent gains in Natural Language Processing have resulted in a shift in the way in which free text can be evaluated. This has allowed for a more linguistic approach which focuses heavily on factual understanding. This study will leverage the research conducted in recent studies in the area of Natural Language Processing, Information Extraction and Information Retrieval in order to provide a fair, timely and accurate assessment of student responses to open questions based on the semantic meaning of those responses.",
"title": ""
},
{
"docid": "71c7c98b55b2b2a9c475d4522310cfaa",
"text": "This paper studies an active underground economy which spec ializes in the commoditization of activities such as credit car d fraud, identity theft, spamming, phishing, online credential the ft, and the sale of compromised hosts. Using a seven month trace of logs c ollected from an active underground market operating on publi c Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal subs trate mature enough to steal wealth into the millions of dollars in less than one year.",
"title": ""
},
{
"docid": "afc12fcceaf1bc1de724ba6e7935c086",
"text": "OLAP tools have been extensively used by enterprises to make better and faster decisions. Nevertheless, they require users to specify group-by attributes and know precisely what they are looking for. This paper takes the first attempt towards automatically extracting top-k insights from multi-dimensional data. This is useful not only for non-expert users, but also reduces the manual effort of data analysts. In particular, we propose the concept of insight which captures interesting observation derived from aggregation results in multiple steps (e.g., rank by a dimension, compute the percentage of measure by a dimension). An example insight is: ``Brand B's rank (across brands) falls along the year, in terms of the increase in sales''. Our problem is to compute the top-k insights by a score function. It poses challenges on (i) the effectiveness of the result and (ii) the efficiency of computation. We propose a meaningful scoring function for insights to address (i). Then, we contribute a computation framework for top-k insights, together with a suite of optimization techniques (i.e., pruning, ordering, specialized cube, and computation sharing) to address (ii). Our experimental study on both real data and synthetic data verifies the effectiveness and efficiency of our proposed solution.",
"title": ""
},
{
"docid": "caa60a57e847cec04d16f9281b3352f3",
"text": "Part-based trackers are effective in exploiting local details of the target object for robust tracking. In contrast to most existing part-based methods that divide all kinds of target objects into a number of fixed rectangular patches, in this paper, we propose a novel framework in which a set of deformable patches dynamically collaborate on tracking of non-rigid objects. In particular, we proposed a shape-preserved kernelized correlation filter (SP-KCF) which can accommodate target shape information for robust tracking. The SP-KCF is introduced into the level set framework for dynamic tracking of individual patches. In this manner, our proposed deformable patches are target-dependent, have the capability to assume complex topology, and are deformable to adapt to target variations. As these deformable patches properly capture individual target subregions, we exploit their photometric discrimination and shape variation to reveal the trackability of individual target subregions, which enables the proposed tracker to dynamically take advantage of those subregions with good trackability for target likelihood estimation. Finally the shape information of these deformable patches enables accurate object contours to be computed as the tracking output. Experimental results on the latest public sets of challenging sequences demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "3519172a7bf6d4183484c613dcc65b0a",
"text": "There has been minimal attention paid in the literature to the aesthetics of the perioral area, either in youth or in senescence. Aging around the lips traditionally was thought to result from a combination of thinning skin surrounding the area, ptosis, and loss of volume in the lips. The atrophy of senescence was treated by adding volume to the lips and filling the deep nasolabial creases. There is now a growing appreciation for the role of volume enhancement in the perioral region and the sunken midface, as well as for dentition, in the resting and dynamic appearance of the perioral area (particularly in youth). In this article, the authors describe the senior author's (BG) preferred methods for aesthetic enhancement of the perioral region and his rejuvenative techniques developed over the past 28 years. The article describes the etiologies behind the dysmorphologies in this area and presents a problem-oriented algorithm for treating them.",
"title": ""
},
{
"docid": "872ef59b5bec5f6cbb9fcb206b6fe49e",
"text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.",
"title": ""
},
{
"docid": "7c99299463d7f2a703f7bd9fbec3df74",
"text": "Group emotional contagion, the transfer of moods among people in a group, and its influence on work group dynamics was examined in a laboratory study of managerial decision making using multiple, convergent measures of mood, individual attitudes, behavior, and group-level dynamics. Using a 2 times 2 experimental design, with a trained confederate enacting mood conditions, the predicted effect of emotional contagion was found among group members, using both outside coders' ratings of participants' mood and participants' selfreported mood. No hypothesized differences in contagion effects due to the degree of pleasantness of the mood expressed and the energy level with which it was conveyed were found. There was a significant influence of emotional contagion on individual-level attitudes and group processes. As predicted, the positive emotional contagion group members experienced improved cooperation, decreased conflict, and increased perceived task performance. Theoretical implications and practical ramifications of emotional contagion in groups and organizations are discussed. Disciplines Human Resources Management | Organizational Behavior and Theory This journal article is available at ScholarlyCommons: http://repository.upenn.edu/mgmt_papers/72 THE RIPPLE EFFECT: EMOTIONAL CONTAGION AND ITS INFLUENCE ON GROUP BEHAVIOR SIGAL G. BARSADE School of Management Yale University Box 208200 New Haven, CT 06520-8200 Telephone: (203) 432-6159 Fax: (203) 432-9994 E-mail: sigal.barsade@yale.edu August 2001 Revise and Resubmit, ASQ; Comments Welcome i I would like to thank my mentor Barry Staw, Charles O’Reilly, JB, Ken Craik, Batia Wiesenfeld, Jennifer Chatman, J. Turners, John Nezlek, Keith Murnigan, Linda Johanson, and three anonymous ASQ reviewers who have helped lead to positive emotional and cognitive contagion.",
"title": ""
},
{
"docid": "22fc1e303a4c2e7d1e5c913dca73bd9e",
"text": "The artificial potential field (APF) approach provides a simple and effective motion planning method for practical purpose. However, artificial potential field approach has a major problem, which is that the robot is easy to be trapped at a local minimum before reaching its goal. The avoidance of local minimum has been an active research topic in path planning by potential field. In this paper, we introduce several methods to solve this problem, emphatically, introduce and evaluate the artificial potential field approach with simulated annealing (SA). As one of the powerful techniques for escaping local minimum, simulated annealing has been applied to local and global path planning",
"title": ""
},
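A minimal Python sketch of the combination evaluated in the abstract above: a standard attractive/repulsive potential field with a Metropolis-style simulated-annealing step used to escape local minima. The potential shapes, gains, temperature schedule and acceptance rule are illustrative assumptions rather than the paper's exact formulation.

```python
import math
import random

def potential(p, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Attractive potential toward the goal plus repulsive terms near obstacles."""
    u = 0.5 * k_att * ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
    for ox, oy in obstacles:
        d = math.hypot(p[0] - ox, p[1] - oy)
        if 1e-6 < d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def apf_sa_step(p, goal, obstacles, temp, step=0.2):
    """Propose a random local move; accept improvements, and with Metropolis
    probability also accept uphill moves (this is what escapes local minima)."""
    cand = (p[0] + random.uniform(-step, step), p[1] + random.uniform(-step, step))
    du = potential(cand, goal, obstacles) - potential(p, goal, obstacles)
    if du < 0 or random.random() < math.exp(-du / max(temp, 1e-9)):
        return cand
    return p

def plan(start, goal, obstacles, temp=5.0, cooling=0.99, iters=5000):
    p = start
    for _ in range(iters):
        p = apf_sa_step(p, goal, obstacles, temp)
        temp *= cooling                      # geometric cooling schedule
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < 0.3:
            break
    return p

print(plan((0.0, 0.0), (10.0, 10.0), obstacles=[(5.0, 5.0)]))
```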
{
"docid": "83742a3fcaed826877074343232be864",
"text": "In this paper we propose a design of the main modulation and demodulation units of a modem compliant with the new DVB-S2 standard (Int. J. Satellite Commun. 2004; 22:249–268). A typical satellite channel model consistent with the targeted applications of the aforementioned standard is assumed. In particular, non-linear pre-compensation as well as synchronization techniques are described in detail and their performance assessed by means of analysis and computer simulations. The proposed algorithms are shown to provide a good trade-off between complexity and performance and they apply to both the broadcast and the unicast profiles, the latter allowing the exploitation of adaptive coding and modulation (ACM) (Proceedings of the 20th AIAA Satellite Communication Systems Conference, Montreal, AIAA-paper 2002-1863, May 2002). Finally, end-to-end system performances in term of BER versus the signal-to-noise ratio are shown as a result of extensive computer simulations. The whole communication chain is modelled in these simulations, including the BCH and LDPC coder, the modulator with the pre-distortion techniques, the satellite transponder model with its typical impairments, the downlink chain inclusive of the RF-front-end phase noise, the demodulator with the synchronization sub-system units and finally the LDPC and BCH decoders. Copyright # 2004 John Wiley & Sons, Ltd.",
"title": ""
}
] |
scidocsrr
|
609ca5aa81db62f38bf6ea117f3271b6
|
RSSI based indoor and outdoor distance estimation for localization in WSN
|
[
{
"docid": "45bd28fbea66930fca36bc20328d6d6f",
"text": "Localization is one of the most challenging and important issues in wireless sensor networks (WSNs), especially if cost-effective approaches are demanded. In this paper, we present intensively discuss and analyze approaches relying on the received signal strength indicator (RSSI). The advantage of employing the RSSI values is that no extra hardware (e.g. ultrasonic or infra-red) is needed for network-centric localization. We studied different factors that affect the measured RSSI values. Finally, we evaluate two methods to estimate the distance; the first approach is based on statistical methods. For the second one, we use an artificial neural network to estimate the distance.",
"title": ""
}
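One plausible form of the statistical distance-estimation method mentioned in the abstract above is the log-distance path-loss model, RSSI(d) = A − 10·n·log10(d), where A is the RSSI at 1 m and n the path-loss exponent. The sketch below inverts that model and fits A and n from calibration pairs; the default parameter values are assumptions that would normally be calibrated separately for indoor and outdoor settings.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, n=2.5):
    """Invert RSSI(d) = A - 10*n*log10(d)  =>  d = 10^((A - RSSI) / (10*n))."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

def fit_path_loss(samples):
    """Least-squares fit of (A, n) in RSSI = A - 10*n*log10(d)
    from (distance_m, rssi_dbm) calibration pairs."""
    xs = [math.log10(d) for d, _ in samples]
    ys = [r for _, r in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope / 10.0   # A (dBm at 1 m), n

# Calibrate on a few reference measurements, then estimate an unknown distance.
A, n = fit_path_loss([(1, -45), (2, -53), (4, -60), (8, -68)])
print(rssi_to_distance(-58.0, rssi_at_1m=A, n=n))
```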
] |
[
{
"docid": "6cbd51bbef3b56df6d97ec7b4348cd94",
"text": "This study reviews human clinical experience to date with several synthetic cannabinoids, including nabilone, levonantradol, ajulemic acid (CT3), dexanabinol (HU-211), HU-308, and SR141716 (Rimonabant®). Additionally, the concept of “clinical endogenous cannabinoid deficiency” is explored as a possible factor in migraine, idiopathic bowel disease, fibromyalgia and other clinical pain states. The concept of analgesic synergy of cannabinoids and opioids is addressed. A cannabinoid-mediated improvement in night vision at the retinal level is discussed, as well as its potential application to treatment of retinitis pigmentosa and other conditions. Additionally noted is the role of cannabinoid treatment in neuroprotection and its application to closed head injury, cerebrovascular accidents, and CNS degenerative diseases including Alzheimer, Huntington, Parkinson diseases and ALS. Excellent clinical results employing cannabis based medicine extracts (CBME) in spasticity and spasms of MS suggests extension of such treatment to other spasmodic and dystonic conditions. Finally, controversial areas of cannabinoid treatment in obstetrics, gynecology and pediatrics are addressed along with a rationale for such interventions. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress. com> Website: <http://www.HaworthPress.com> 2003 by The Haworth Press, Inc. All rights reserved.]",
"title": ""
},
{
"docid": "643d75042a38c24b0e4130cb246fc543",
"text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.",
"title": ""
},
{
"docid": "14fe4e2fb865539ad6f767b9fc9c1ff5",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "2f48b326aaa7b41a7ee347cedce344ed",
"text": "In this paper a new kind of quasi-quartic trigonometric polynomial base functions with two shape parameters λ and μ over the space Ω = span {1, sin t, cos t, sin2t, cos2t, sin3t, cos3t} is presented and the corresponding quasi-quartic trigonometric Bézier curves and surfaces are defined by the introduced base functions. Each curve segment is generated by five consecutive control points. The shape of the curve can be adjusted by altering the values of shape parameters while the control polygon is kept unchanged. These curves inherit most properties of the usual quartic Bézier curves in the polynomial space and they can be used as an efficient new model for geometric design in the fields of CAGD.",
"title": ""
},
{
"docid": "082894a8498a5c22af8903ad8ea6399a",
"text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.",
"title": ""
},
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "8906b0cf1b58f6d58a15538946aacd5f",
"text": "This glossary presents a comprehensive list of indicators of socioeconomic position used in health research. A description of what they intend to measure is given together with how data are elicited and the advantages and limitation of the indicators. The glossary is divided into two parts for journal publication but the intention is that it should be used as one piece. The second part highlights a life course approach and will be published in the next issue of the journal.",
"title": ""
},
{
"docid": "b2d8c0397151ca043ffb5cef8046d2af",
"text": "This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 intentionally represent a broader range of quality than the ICE 2006 sensor would normally acquire. This includes images that did not pass the quality control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and the single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and the iris images. In an experiment comparing human and algorithms on matching face identity across changes in illumination on frontal face images, the best performing algorithms were more accurate than humans on unfamiliar faces.",
"title": ""
},
{
"docid": "013ca7d513b658f2dac68644a915b43a",
"text": "Money laundering a suspicious fund transfer between accounts without names which affects and threatens the stability of countries economy. The growth of internet technology and loosely coupled nature of fund transfer gateways helps the malicious user’s to perform money laundering. There are many approaches has been discussed earlier for the detection of money laundering and most of them suffers with identifying the root of money laundering. We propose a time variant approach using behavioral patterns to identify money laundering. In this approach, the transaction logs are split into various time window and for each account specific to the fund transfer the time value is split into different time windows and we generate the behavioral pattern of the user. The behavioral patterns specifies the method of transfer between accounts and the range of amounts and the frequency of destination accounts and etc.. Based on generated behavioral pattern , the malicious transfers and accounts are identified to detect the malicious root account. The proposed approach helps to identify more suspicious accounts and their group accounts to perform money laundering identification. The proposed approach has produced efficient results with less time complexity.",
"title": ""
},
{
"docid": "f8435db6c6ea75944d1c6b521e0f3dd3",
"text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0b894c503a11c7638c0fd25ea22088dc",
"text": "We are moving towards a general public where web is the need of hour. Today the vast majority of the product applications executed, are composed as online applications which are keep running in a web program. Testing programming applications is critical. Numerous associations make utilization of a specific web application, so the same web applications are tried habitually by diverse clients from distinctive regions physically. Testing a web application physically is tedious, so we go for test automation. In test automation we make utilization of a product device to run repeatable tests against the application to be tried. There are various focal points of test automation. They are exceptionally exact and have more prominent preparing pace when contrasted with manual automation. There are various open source and business devices accessible for test mechanization. Selenium is one of the broadly utilized open source device for test computerization. Test automation enhances the effectiveness of programming testing procedures. Test automation gives quick criticism to engineers. It additionally discovers the imperfections when one may miss in the manual testing. In test automation we can perform boundless emphases for testing the same example of code ceaselessly commonly.",
"title": ""
},
{
"docid": "419116a3660f1c1f7127de31f311bd1e",
"text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.",
"title": ""
},
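For readers unfamiliar with it, the classic MAX-VAR GCCA problem referred to in the abstract above is commonly written as follows; the notation here is a conventional textbook form and is only assumed to match the paper's.

```latex
\min_{\{\mathbf{Q}_i\},\ \mathbf{G}}\ \sum_{i=1}^{I} \big\| \mathbf{X}_i \mathbf{Q}_i - \mathbf{G} \big\|_F^2
\quad \text{s.t.} \quad \mathbf{G}^{\top}\mathbf{G} = \mathbf{I}
```

Here X_i is the data matrix of view i, Q_i its canonical loadings, and G the shared latent representation with orthonormal columns; structure-promoting regularizers r_i(Q_i) can be added to each term to impose, for example, sparsity on the canonical components.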
{
"docid": "a9b20ad74b3a448fbc1555b27c4dcac9",
"text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.",
"title": ""
},
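A minimal NumPy sketch of the sign-based weight update that the abstract above describes. The increase/decrease factors (1.2, 0.5), the step bounds, and the "zero the gradient after a sign change" rule are the commonly quoted defaults of the iRPROP− variant, stated here as assumptions.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP iteration: adapt each weight's step size from the sign of the
    gradient only, then move against the current gradient sign."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # where the sign flipped, skip the update this iteration (gradient treated as 0)
    effective_grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(effective_grad) * step
    return w, step, effective_grad

# Toy usage on the quadratic error surface E(w) = 0.5 * ||w||^2, so dE/dw = w
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = w
    w, step, prev_grad = rprop_update(w, grad, prev_grad, step)
print(w)
```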
{
"docid": "c5103654cc2b28bc4408c2d0bee17f13",
"text": "Unless the practitioner is familiar with the morphology of the roots of all teeth, and the associated intricate root canal anatomy, effective debridement and obturation may be impossible. Recent research has improved knowledge and understanding of this intricate aspect of dental practice. After studying this part you should know in what percentage of each tooth type you may expect unusual numbers of root canals and other anatomical variations.",
"title": ""
},
{
"docid": "b1958bbb9348a05186da6db649490cdd",
"text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.",
"title": ""
},
{
"docid": "ec6fb21b7ae27cc4df67f3d6745ffe34",
"text": "In today's world data is growing very rapidly, which we call as big data. To deal with these large data sets, currently we are using NoSQL databases, as relational database is not capable for handling such data. These schema less NoSQL database allow us to handle unstructured data. Through this paper we are comparing two NoSQL databases MongoDB and CouchBase server, in terms of image storage and retrieval. Aim behind selecting these two databases as both comes under Document store category. Major applications like social media, traffic analysis, criminal database etc. require image database. The motivation behind this paper is to compare database performance in terms of time required to store and retrieve images from database. In this paper, firstly we are going describe advantages of NoSQL databases over SQL, then brief idea about MongoDB and CouchBase and finally comparison of time required to insert various size images in databases and to retrieve various size images using front end tool Java.",
"title": ""
},
{
"docid": "2d41891667b3cc0572827c104fb2c1c1",
"text": "Stock market prediction is forever important issue for investor. Computer science plays vital role to solve this problem. From the evolution of machine learning, people from this area are busy to solve this problem effectively. Many different techniques are used to build predicting system. This research describes different state of the art techniques used for stock forecasting and compare them w.r.t. their pros and cons. We have classified different techniques categorically; Time Series, Neural Network and its different variation (RNN, ESN, MLP, LRNN etc.) and different hybrid techniques (combination of neural network with different machine learning techniques) (ANFIS, GA/ATNN, GA/TDNN, ICA-BPN). By extensive study of different techniques, it was analyzed that Neural Network is the best technique till time to predict stock prices especially when some denoising schemes are applied with neural network. We, also, have implemented and compared different neural network techniques like Layered Recurrent Neural Network (LRNN), Wsmpca-NN and Feed forward Neural Network (NN). By comparing said techniques, it was observed that LRNN performs better than feed forward NN and Wsmpca-NN performs better than LRNN and NN. We have applied said techniques on PSO (Pakistan State Oil), S&P500 data sets.",
"title": ""
},
{
"docid": "dc64fa6178f46a561ef096fd2990ad3d",
"text": "Forest fires cost millions of dollars in damages and claim many human lives every year. Apart from preventive measures, early detection and suppression of fires is the only way to minimize the damages and casualties. We present the design and evaluation of a wireless sensor network for early detection of forest fires. We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. Then, we model the forest fire detection problem as a coverage problem in wireless sensor networks, and we present a distributed algorithm to solve it. In addition, we show how our algorithm can achieve various coverage degrees at different subareas of the forest, which can be used to provide unequal monitoring quality of forest zones. Unequal monitoring is important to protect residential and industrial neighborhoods close to forests. Finally, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. We validate several aspects of our design using simulation.",
"title": ""
},
{
"docid": "55aff936a5ff97d9229e90f6d5394b2e",
"text": "Children are ubiquitous imitators, but how do they decide which actions to imitate? One possibility is that children rationally combine multiple sources of information about which actions are necessary to cause a particular outcome. For instance, children might learn from contingencies between action sequences and outcomes across repeated demonstrations, and they might also use information about the actor's knowledge state and pedagogical intentions. We define a Bayesian model that predicts children will decide whether to imitate part or all of an action sequence based on both the pattern of statistical evidence and the demonstrator's pedagogical stance. To test this prediction, we conducted an experiment in which preschool children watched an experimenter repeatedly perform sequences of varying actions followed by an outcome. Children's imitation of sequences that produced the outcome increased, in some cases resulting in production of shorter sequences of actions that the children had never seen performed in isolation. A second experiment established that children interpret the same statistical evidence differently when it comes from a knowledgeable teacher versus a naïve demonstrator. In particular, in the pedagogical case children are more likely to \"overimitate\" by reproducing the entire demonstrated sequence. This behavior is consistent with our model's predictions, and suggests that children attend to both statistical and pedagogical evidence in deciding which actions to imitate, rather than obligately imitating successful action sequences.",
"title": ""
},
{
"docid": "2af56829daf6d2c6c633c759d07f2208",
"text": "Height of Burst (HOB) sensor is one of the critical parts in guided missiles. While seekers control the guiding scheme of the missile, proximity sensors set the trigger for increased effectiveness of the warhead. For the well-developed guided missiles of Roketsan, a novel proximity sensor is developed. The design of the sensor is for multi-purpose use. In this presentation, the application of the sensor is explained for operation as a HOB sensor in the range of 3m–50m with ± 1m accuracy. Measurement results are also presented. The same sensor is currently being developed for proximity sensor for missile defence.",
"title": ""
}
] |
scidocsrr
|
9d4d1861a00d94986f1fed4bbbe06218
|
Analyzing User Activities, Demographics, Social Network Structure and User-Generated Content on Instagram
|
[
{
"docid": "349f85e6ffd66d6a1dd9d9c6925d00bc",
"text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.",
"title": ""
}
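A minimal Python sketch of the second stage described in the abstract above: once GPS traces have been clustered into significant locations, a first-order Markov model over those locations can be consulted for prediction. The clustering step is omitted here and the location labels are illustrative.

```python
from collections import defaultdict

class LocationMarkovModel:
    """First-order Markov model over discrete (clustered) locations."""

    def __init__(self):
        # current location -> {next location -> observed transition count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, location_sequence):
        """Count observed transitions between consecutive significant locations."""
        for a, b in zip(location_sequence, location_sequence[1:]):
            self.counts[a][b] += 1

    def predict_next(self, current):
        """Return the most likely next location and its transition probability."""
        nxt = self.counts.get(current)
        if not nxt:
            return None, 0.0
        total = sum(nxt.values())
        best = max(nxt, key=nxt.get)
        return best, nxt[best] / total

# Toy usage on a labelled sequence of visited places
model = LocationMarkovModel()
model.train(["home", "work", "gym", "home", "work", "home", "work", "gym"])
print(model.predict_next("work"))   # e.g. ('gym', 0.666...)
```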
] |
[
{
"docid": "733e379ecaab79ac328f55ccc2384b69",
"text": "Introduction Since Beijing 1995, gender mainstreaming has heralded the beginning of a renewed effort to address what is seen as one of the roots of gender inequality: the genderedness of systems, procedures and organizations. In the definition of the Council of Europe, gender mainstreaming is the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policymaking. All member states and some candidate states of the European Union have started to implement gender mainstreaming. The 1997 Treaty of Amsterdam places equality between women and men among the explicit tasks of the European Union and obliges the EU to promote gender equality in all its tasks and activities. The Gender Mainstreaming approach that has been legitimated by this Treaty is backed by legislation and by positive action in favour of women (or the “under-represented sex”). Gender equality policies have not only been part and parcel of modernising action in the European Union, but can be expected to continue to be so (Rossili 2000). With regard to gender inequality, the EU has both a formal EU problem definition at the present time, and a formalised set of EU strategies. Problems in the implementation of gender equality policies abound, at both national and EU level. To give just one example, it took the Netherlands – usually very supportive of the EU –14 years to implement article 119 on Equal Pay (Van der Vleuten 2001). Moreover, it has been documented that overall EU action has run counter to its goal of gender equality. Overall EU action has weakened women’s social rights more seriously than men’s (Rossili 2000). The introduction of Gender Mainstreaming, the incorporation of gender and women’s concerns in all regular policymaking is meant to address precisely this problem of a contradiction between specific gender policies and regular EU policies. Yet, in the case of the Structural Funds, for instance, Gender Mainstreaming has been used to further reduce existing funds and incentives for gender equality (Rossili 2000). Against this backdrop, this paper will present an approach at studying divergences in policy frames around gender equality as one of the elements connected to implementation problems: the MAGEEQ project.",
"title": ""
},
{
"docid": "2e864dcde57ea1716847f47977af0140",
"text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.",
"title": ""
},
{
"docid": "ce402c150d74cbc954378ea7927dfa71",
"text": "The study investigated the influence of extrinsic and intrinsic motivation on employees performance. Subjects for the study consisted of one hundred workers of Flour Mills of Nigeria PLC, Lagos. Data for the study were gathered through the administration of a self-designed questionnaire. The data collected were subjected to appropriate statistical analysis using Pearson Product Moment Correlation Coefficient, and all the findings were tested at 0.05 level of significance. The result obtained from the analysis showed that there existed relationship between extrinsic motivation and the performance of employees, while no relationship existed between intrinsic motivation and employees performance. On the basis of these findings, implications of the findings for future study were stated.",
"title": ""
},
{
"docid": "b594a4fafc37a18773b1144dfdbb965d",
"text": "Deep generative modelling for robust human body analysis is an emerging problem with many interesting applications, since it enables analysis-by-synthesis and unsupervised learning. However, the latent space learned by such models is typically not human-interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised variational auto-encoder approach and present a deep generative model for human body analysis where the pose and appearance are disentangled in the latent space, allowing for pose estimation. Such a disentanglement allows independent manipulation of pose and appearance and hence enables applications such as pose-transfer without being explicitly trained for such a task. In addition, the ability to train in a semi-supervised setting relaxes the need for labelled data. We demonstrate the merits of our generative model on the Human3.6M and ChictopiaPlus datasets.",
"title": ""
},
{
"docid": "20dd21215f9dc6bd125b2af53500614d",
"text": "In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and lowlevel syntactic paraphrases that are relevant to the domain in hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation system.",
"title": ""
},
{
"docid": "9f184ba1cfe36fde398f896b1ce93745",
"text": "http://dx.doi.org/10.1016/j.compag.2015.08.011 0168-1699/ 2015 Elsevier B.V. All rights reserved. ⇑ Corresponding author at: School of Information Technology, Indian Institute of Technology Kharagpur, India. E-mail addresses: tojha@sit.iitkgp.ernet.in (T. Ojha), smisra@sit.iitkgp.ernet.in (S. Misra), nsr@agfe.iitkgp.ernet.in (N.S. Raghuwanshi). Tamoghna Ojha a,b,⇑, Sudip Misra , Narendra Singh Raghuwanshi b",
"title": ""
},
{
"docid": "d1357b2e247d521000169dce16f182ee",
"text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.",
"title": ""
},
{
"docid": "28b70047cb41f765504f8f9b54456cc4",
"text": "BACKGROUND\nAccelerometers are widely used to measure sedentary time, physical activity, physical activity energy expenditure (PAEE), and sleep-related behaviors, with the ActiGraph being the most frequently used brand by researchers. However, data collection and processing criteria have evolved in a myriad of ways out of the need to answer unique research questions; as a result there is no consensus.\n\n\nOBJECTIVES\nThe purpose of this review was to: (1) compile and classify existing studies assessing sedentary time, physical activity, energy expenditure, or sleep using the ActiGraph GT3X/+ through data collection and processing criteria to improve data comparability and (2) review data collection and processing criteria when using GT3X/+ and provide age-specific practical considerations based on the validation/calibration studies identified.\n\n\nMETHODS\nTwo independent researchers conducted the search in PubMed and Web of Science. We included all original studies in which the GT3X/+ was used in laboratory, controlled, or free-living conditions published from 1 January 2010 to the 31 December 2015.\n\n\nRESULTS\nThe present systematic review provides key information about the following data collection and processing criteria: placement, sampling frequency, filter, epoch length, non-wear-time, what constitutes a valid day and a valid week, cut-points for sedentary time and physical activity intensity classification, and algorithms to estimate PAEE and sleep-related behaviors. The information is organized by age group, since criteria are usually age-specific.\n\n\nCONCLUSION\nThis review will help researchers and practitioners to make better decisions before (i.e., device placement and sampling frequency) and after (i.e., data processing criteria) data collection using the GT3X/+ accelerometer, in order to obtain more valid and comparable data.\n\n\nPROSPERO REGISTRATION NUMBER\nCRD42016039991.",
"title": ""
},
{
"docid": "a45294bcd622c526be47975abe4e6d66",
"text": "Identification of gene locations in a DNA sequence is one of the important problems in the area of genomics. Nucleotides in exons of a DNA sequence show f = 1/3 periodicity. The period-3 property in exons of eukaryotic gene sequences enables signal processing based time-domain and frequency-domain methods to predict these regions. Identification of the period-3 regions helps in predicting the gene locations within the billions long DNA sequence of eukaryotic cells. Existing non-parametric filtering techniques are less effective in detecting small exons. This paper presents a harmonic suppression filter and parametric minimum variance spectrum estimation technique for gene prediction. We show that both the filtering techniques are able to detect smaller exon regions and adaptive MV filter minimizes the power in introns (non-coding regions) giving more suppression to the intron regions. Furthermore, 2-simplex mapping is used to reduce the computational complexity.",
"title": ""
},
{
"docid": "7f84e215df3d908249bde3be7f2b3cab",
"text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.",
"title": ""
},
{
"docid": "b6cd222b0bc5c2839c66cdf4538d7264",
"text": "Stereoscopic 3D (S3D) movies have become widely popular in the movie theaters, but the adoption of S3D at home is low even though most TV sets support S3D. It is widely believed that S3D with glasses is not the right approach for the home. A much more appealing approach is to use automulti-scopic displays that provide a glasses-free 3D experience to multiple viewers. A technical challenge is the lack of native multiview content that is required to deliver a proper view of the scene for every viewpoint. Our approach takes advantage of the abundance of stereoscopic 3D movies. We propose a real-time system that can convert stereoscopic video to a high-quality multiview video that can be directly fed to automultiscopic displays. Our algorithm uses a wavelet-based decomposition of stereoscopic images with per-wavelet disparity estimation. A key to our solution lies in combining Lagrangian and Eulerian approaches for both the disparity estimation and novel view synthesis, which leverages the complementary advantages of both techniques. The solution preserves all the features of Eulerian methods, e.g., subpixel accuracy, high performance, robustness to ambiguous depth cases, and easy integration of inter-view aliasing while maintaining the advantages of Lagrangian approaches, e.g., robustness to large disparities and possibility of performing non-trivial disparity manipulations through both view extrapolation and interpolation. The method achieves real-time performance on current GPUs. Its design also enables an easy hardware implementation that is demonstrated using a field-programmable gate array. We analyze the visual quality and robustness of our technique on a number of synthetic and real-world examples. We also perform a user experiment which demonstrates benefits of the technique when compared to existing solutions.",
"title": ""
},
{
"docid": "e793b233039c9cb105fa311fa08312cd",
"text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. Some simulation results are presented to verify the proposed MCSI topology.",
"title": ""
},
{
"docid": "1efcace33a3a6ad7805f765edfafb6f4",
"text": "Recently, new configurations of robot legs using a parallel mechanism have been studied for improving the locomotion ability in four-legged robots. However, it is difficult to obtain full dynamics of the parallel-mechanism robot legs because this mechanism has many links and complex constraint conditions, which make it difficult to design a modelbased controller. Here, we propose the simplified modeling of a parallel-mechanism robot leg with two degrees-of-freedom (2DOF), which can be used instead of complex full dynamics for model-based control. The new modeling approach considers the robot leg as a 2DOF Revolute and Prismatic(RP) manipulator, inspired by the actuation mechanism of robot legs, for easily designing a nominal model of the controller. To verify the effectiveness of the new modeling approach experimentally, we conducted dynamic simulations using a commercial multi-dynamics simulator. The simulation results confirmed that the proposed modeling approach could be an alternative modeling method for parallel-mechanism robot legs.",
"title": ""
},
{
"docid": "e9c4877bca5f1bfe51f97818cc4714fa",
"text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. 
Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using",
"title": ""
},
{
"docid": "929534782eaaa41186a1138b0439cdca",
"text": "How do observers respond when the actions of one individual inflict harm on another? The primary reaction to carelessly inflicted harm is to seek restitution; the offender is judged to owe compensation to the harmed individual. The primary reaction to harm inflicted intentionally is moral outrage producing a desire for retribution; the harm-doer must be punished. Reckless conduct, an intermediate case, provokes reactions that involve elements of both careless and intentional harm. The moral outrage felt by those who witness transgressions is a product of both cognitive interpretations of the event and emotional reactions to it. Theory about the exact nature of the emotional reactions is considered, along with suggestions for directions for future research.",
"title": ""
},
{
"docid": "c75ee3e700806bcb098f6e1c05fdecfc",
"text": "This study examines patterns of cellular phone adoption and usage in an urban setting. One hundred and seventy-six cellular telephone users were surveyed abou their patterns of usage, demographic and socioeconomic characteristics, perceptions about the technology, and their motivations to use cellular services. The results of this study confirm that users' perceptions are significantly associated with their motivation to use cellular phones. Specifically, perceived ease of use was found to have significant effects on users' extrinsic and intrinsic motivations; apprehensiveness about cellular technology had a negative effect on intrinsic motivations. Implications of these findings for practice and research are examined.",
"title": ""
},
{
"docid": "627e4d3c2dfb8233f0e345410064f6d0",
"text": "Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using the side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering,termed as BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting the pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information since clustering algorithms by definition are unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results at previous iterations by the given algorithm, and on the other hand consistent with the given side information. Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional SingleLink, spectral clustering), and its performance is comparable to the state-of-the-art algorithms for data clustering with side information.",
"title": ""
},
{
"docid": "9b791932b6f2cdbbf0c1680b9a610614",
"text": "To survive in today’s global marketplace, businesses need to be able to deliver products on time, maintain market credibility and introduce new products and services faster than competitors. This is especially crucial to the Smalland Medium-sized Enterprises (SMEs). Since the emergence of the Internet, it has allowed SMEs to compete effectively and efficiently in both domestic and international market. Unfortunately, such leverage is often impeded by the resistance and mismanagement of SMEs to adopt Electronic Commerce (EC) proficiently. Consequently, this research aims to investigate how SMEs can adopt and implement EC successfully to achieve competitive advantage. Building on an examination of current technology diffusion literature, a model of EC diffusion has been developed. It investigates the factors that influence SMEs in the adoption of EC, followed by an examination in the diffusion process, which SMEs adopt to integrate EC into their business systems.",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
},
{
"docid": "e67986714c6bda56c03de25168c51e6b",
"text": "With the development of modern technology and Android Smartphone, Smart Living is gradually changing people’s life. Bluetooth technology, which aims to exchange data wirelessly in a short distance using short-wavelength radio transmissions, is providing a necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system called home lighting control system using Bluetooth-based Android Smartphone is proposed and prototyped. First Smartphone, Smart Living and Bluetooth technology are reviewed. Second the system architecture, communication protocol and hardware design aredescribed. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that Android Smartphone can provide a platform to implement Bluetooth-based application for Smart Living.",
"title": ""
}
] |
scidocsrr
|
0f9121a2bbc0c9f9ba5dfa567e29e17d
|
PLDA: Parallel Latent Dirichlet Allocation for Large-Scale Applications
|
[
{
"docid": "64e93cfb58b7cf331b4b74fadb4bab74",
"text": "Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n2) to O(np/m), and improves computation time to O(np2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.",
"title": ""
},
{
"docid": "83060ef5605b19c14d8b0f41cbd61de5",
"text": "We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to manydifferent learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a singlealgorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.",
"title": ""
}
] |
[
{
"docid": "fd9992b50e6d58afab53954eac400b84",
"text": "Several physico-mechanical designs evolved in fish are currently inspiring robotic devices for propulsion and manoeuvring purposes in underwater vehicles. Considering the potential benefits involved, this paper presents an overview of the swimming mechanisms employed by fish. The motivation is to provide a relevant and useful introduction to the existing literature for engineers with an interest in the emerging area of aquatic biomechanisms. The fish swimming types are presented following the well-established classification scheme and nomenclature originally proposed by Breder. Fish swim either by Body and/or Caudal Fin (BCF) movements or using Median and/or Paired Fin (MPF) propulsion. The latter is generally employed at slow speeds, offering greater manoeuvrability and better propulsive efficiency, while BCF movements can achieve greater thrust and accelerations. For both BCF and MPF locomotion specific swimming modes are identified, based on the propulsor and the type of movements (oscillatory or undulatory) employed for thrust generation. Along with general descriptions and kinematic data, the analytical approaches developed to study each swimming mode are also introduced. Particular reference is made to lunate tail propulsion, undulating fins and labriform (oscillatory pectoral fin) swimming mechanisms, identified as having the greatest potential for exploitation in artificial systems. Index Terms marine animals, hydrodynamics, underwater vehicle propulsion, mobile robots, kinematics * Submitted as a regular paper to the IEEE Journal of Oceanic Engineering, March 1998. † Ocean Systems Laboratory, Dept. of Computing & Electrical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, U.K. Tel: +(44) (0) 131 4513350. Fax: +(44) (0) 131 4513327. Email: dml@cee.hw.ac.uk ‡ Dept. of Mechanical & Chemical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland,U.K. Review of Fish Swimming Modes for Aquatic Locomotion -2",
"title": ""
},
{
"docid": "ed9528fe8e4673c30de35d33130c728e",
"text": "This paper introduces a friendly system to control the home appliances remotely by the use of mobile cell phones; this system is well known as “Home Automation System” (HAS).",
"title": ""
},
{
"docid": "fb173d15e079fcdf0cc222f558713f9c",
"text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.",
"title": ""
},
{
"docid": "7e6eab1db77c8404720563d0eed1b325",
"text": "With the success of Open Data a huge amount of tabular data sources became available that could potentially be mapped and linked into the Web of (Linked) Data. Most existing approaches to “semantically label” such tabular data rely on mappings of textual information to classes, properties, or instances in RDF knowledge bases in order to link – and eventually transform – tabular data into RDF. However, as we will illustrate, Open Data tables typically contain a large portion of numerical columns and/or non-textual headers; therefore solutions that solely focus on textual “cues” are only partially applicable for mapping such data sources. We propose an approach to find and rank candidates of semantic labels and context descriptions for a given bag of numerical values. To this end, we apply a hierarchical clustering over information taken from DBpedia to build a background knowledge graph of possible “semantic contexts” for bags of numerical values, over which we perform a nearest neighbour search to rank the most likely candidates. Our evaluation shows that our approach can assign fine-grained semantic labels, when there is enough supporting evidence in the background knowledge graph. In other cases, our approach can nevertheless assign high level contexts to the data, which could potentially be used in combination with other approaches to narrow down the search space of possible labels.",
"title": ""
},
{
"docid": "1e7b2271c7efc02f2e9148cefc55e7a1",
"text": "Foot-and-mouth disease (FMD) is one of the most important diseases with heavy economic losses. The causative agent of the disease is a virus, named as FMD virus, belonging to the picornavirus family. There is no treatment for the disease and vaccination is the main control strategy. Several vaccination methods have been introduced against FMD including DNA vaccines. In this study, two genetic constructs, which were defined by absence and presence of an intron, were tested for their ability to induce the anti-FMD virus responses in mouse. Both constructs encoded a fusion protein consisting of viral (P12A and 3C) and EGFP proteins under the control of CMV promoter. The protein expression was studied in the COS-7 cells transfected with the plasmids by detecting EGFP protein. Cell death was induced in the cells expressing the P12A3C-EGFP, but not the EGFP, protein. This might be explained by the protease activity of the 3C protein which cleaved critical proteins of the host cells. Mice injected with the intron-containing plasmid induced 16-fold higher antibody level than the intronless plasmid. In addition, serum neutralization antibodies were only induced in the mice injected with intron-containing plasmid. In conclusion, the use of intron might be a useful strategy for enhancing antibody responses by DNA vaccines. Moreover, cell death inducing activity of the 3C protein might suggest applying it along with DNA vaccines to improve immunogenicity.",
"title": ""
},
{
"docid": "0569bcd89de031431e755ad827cc6828",
"text": "In his enigmatic death bed letter to Hardy, written in January 1920, Ramanujan introduced the notion of a mock theta function. Despite many works, very little was known about the role that these functions play within the theory of automorphic and modular forms until 2002. In that year Sander Zwegers (in his Ph.D. thesis) established that these functions are “holomorphic parts” of harmonic Maass forms. This realization has resulted in many applications in a wide variety of areas: arithmetic geometry, combinatorics, modular forms, and mathematical physics. Here we outline the general facets of the theory, and we give several applications to number theory: partitions and q-series, modular forms, singular moduli, Borcherds products, extensions of theorems of Kohnen-Zagier and Waldspurger on modular L-functions, and the work of Bruinier and Yang on Gross-Zagier formulae. Following our discussion of these works on harmonic Maass forms, we shall then study the emerging new theory of quantum modular forms. Don Zagier introduced the notion of a quantum modular form in his 2010 Clay lecture, and it turns out that a beautiful part of this theory lives at the interface of classical modular forms and harmonic Maass forms.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "af28e57d508511ce4f494eb45da0e525",
"text": "Posthumanism entails the idea of transcendence of the human being achieved through technology. The article begins by distinguishing perfection and change (or growth). It also attempts to show the anthropological premises of posthumanism itself and suggests that we can identify two roots: the liberal humanistic subject (autonomous and unrelated that simply realizes herself/himself through her/his own project) and the interpretation of thought as a computable process. Starting from these premises, many authors call for the loosening of the clear boundaries of one’s own subject in favour of blending with other beings. According to these theories, we should become post-human: if the human being is thought and thought is a computable process, whatever is able to process information broader and faster is better than the actual human being and has to be considered as the way towards the real completeness of the human being itself. The paper endeavours to discuss the adequacy of these premises highlighting the structural dependency of the human being, the role of the human body, the difference between thought and a computational process, the singularity of some useless and unexpected human acts. It also puts forward the need for axiological criteria to define growth as perfectionism.",
"title": ""
},
{
"docid": "a430a43781d7fd4e36cd393103958265",
"text": "BACKGROUND\nThis review evaluates the DSM-IV criteria of social anxiety disorder (SAD), with a focus on the generalized specifier and alternative specifiers, the considerable overlap between the DSM-IV diagnostic criteria for SAD and avoidant personality disorder, and developmental issues.\n\n\nMETHOD\nA literature review was conducted, using the validators provided by the DSM-V Spectrum Study Group. This review presents a number of options and preliminary recommendations to be considered for DSM-V.\n\n\nRESULTS/CONCLUSIONS\nLittle supporting evidence was found for the current specifier, generalized SAD. Rather, the symptoms of individuals with SAD appear to fall along a continuum of severity based on the number of fears. Available evidence suggested the utility of a specifier indicating a \"predominantly performance\" variety of SAD. A specifier based on \"fear of showing anxiety symptoms\" (e.g., blushing) was considered. However, a tendency to show anxiety symptoms is a core fear in SAD, similar to acting or appearing in a certain way. More research is needed before considering subtyping SAD based on core fears. SAD was found to be a valid diagnosis in children and adolescents. Selective mutism could be considered in part as a young child's avoidance response to social fears. Pervasive test anxiety may belong not only to SAD, but also to generalized anxiety disorder. The data are equivocal regarding whether to consider avoidant personality disorder simply a severe form of SAD. Secondary data analyses, field trials, and validity tests are needed to investigate the recommendations and options.",
"title": ""
},
{
"docid": "5ff345f050ec14b02c749c41887d592d",
"text": "Testing multithreaded code is hard and expensive. Each multithreaded unit test creates two or more threads, each executing one or more methods on shared objects of the class under test. Such unit tests can be generated at random, but basic generation produces tests that are either slow or do not trigger concurrency bugs. Worse, such tests have many false alarms, which require human effort to filter out. We present BALLERINA, a novel technique for automatic generation of efficient multithreaded random tests that effectively trigger concurrency bugs. BALLERINA makes tests efficient by having only two threads, each executing a single, randomly selected method. BALLERINA increases chances that such a simple parallel code finds bugs by appending it to more complex, randomly generated sequential code. We also propose a clustering technique to reduce the manual effort in inspecting failures of automatically generated multithreaded tests. We evaluate BALLERINA on 14 real-world bugs from 6 popular codebases: Groovy, Java JDK, jFreeChart, Log4j, Lucene, and Pool. The experiments show that tests generated by BALLERINA can find bugs on average 2X-10X faster than various configurations of basic random generation, and our clustering technique reduces the number of inspected failures on average 4X-8X. Using BALLERINA, we found three previously unknown bugs in Apache Pool and Log4j, one of which was already confirmed and fixed.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8a414e60b4a81da21d21d5bcfcff1ccf",
"text": "We propose an e¢ cient liver allocation system for allocating donated organs to patients waiting for transplantation, the only viable treatment for End-Stage Liver Disease. We optimize two metrics which are used to measure the e¢ ciency: total quality adjusted life years and the number of organs wasted due to patients rejecting some organ o¤ers. Our model incorporates the possibility that the patients may turn down the organ o¤ers. Given the scarcity of available organs relative to the number patients waiting for transplantation, we model the system as a multiclass uid model of overloaded queues. The uid model we advance captures the disease evolution over time by allowing the patients to switch between classes over time, e.g. patients waiting for transplantation may get sicker/better, or may die. We characterize the optimal solution to the uid model using the duality framework for optimal control problems developed by Rockafellar (1970a). The optimal solution for assigning livers to patients is an intuitive dynamic index policy, where the indices depend on patients acceptance probabilities of the organ o¤er, immediate rewards, and the shadow prices calculated from the dual dynamical system. Finally, we perform a detailed simulation study to demonstrate the e¤ectiveness of the proposed policy using data from the United Network for Organ Sharing System (UNOS).",
"title": ""
},
{
"docid": "706b2948b19d15953809d2bdff4c04a3",
"text": "The aim of image enhancement is to produce a processed image which is more suitable than the original image for specific application. Application can be edge detection, boundary detection, image fusion, segmentation etc. In this paper different types of image enhancement algorithms in spatial domain are presented for gray scale as well as for color images. Quantitative analysis like AMBE (Absolute mean brightness error), MSE (Mean square error) and PSNR (Peak signal to noise ratio) for the different algorithms are evaluated. For gray scale image Weighted histogram equalization, Linear contrast stretching (LCS), Non linear contrast stretching logarithmic (NLLCS), Non linear contrast stretching exponential (NLECS), Bi Histogram Equalization (BHE) algorithms are discussed and compared. For color image (RGB) Linear contrast stretching, Non linear contrast stretching logarithmic and Non linear contrast stretching exponential algorithms are discussed. During result analysis, it has been observed that some algorithms does give considerably highly distinct values(MSE or AMBE) for different images. To stabilize these parameters, had proposed the new enhancement scheme Local mean and local standard deviation(LMLS) which will take care of these issues. By experimental analysis It has been observed that proposed method gives better AMBE (should be less) and PSNR (should be high) values compared with other algorithms, also these values are not highly distinct for different images.",
"title": ""
},
{
"docid": "e42805b57fa2f8f95d03fea8af2e8560",
"text": "Models are used in a variety of fields, including land change science, to better understand the dynamics of systems, to develop hypotheses that can be tested empirically, and to make predictions and/or evaluate scenarios for use in assessment activities. Modeling is an important component of each of the three foci outlined in the science plan of the Land-use and -cover change (LUCC) project (Turner et al. 1995) of the International Geosphere-Biosphere Program (IGBP) and the International Human Dimensions Program (IHDP). In Focus 1, on comparative land-use dynamics, models are used to help improve our understanding of the dynamics of land-use that arise from human decision-making at all levels, households to nations. These models are supported by surveys and interviews of decision makers. Focus 2 emphasizes development of empirical diagnostic models based on aerial and satellite observations of spatial and temporal land-cover dynamics. Finally, Focus 3 focuses specifically on the development of models of land-use and -cover change (LUCC) that can be used for prediction and scenario generation in the context of integrative assessments of global change.",
"title": ""
},
{
"docid": "e59f53449783b3b7aceef8ae3b43dae1",
"text": "W E use the definitions of (11). However, in deference to some recent attempts to unify the terminology of graph theory we replace the term 'circuit' by 'polygon', and 'degree' by 'valency'. A graph G is 3-connected (nodally 3-connected) if it is simple and non-separable and satisfies the following condition; if G is the union of two proper subgraphs H and K such that HnK consists solely of two vertices u and v, then one of H and K is a link-graph (arc-graph) with ends u and v. It should be noted that the union of two proper subgraphs H and K of G can be the whole of G only if each of H and K includes at least one edge or vertex not belonging to the other. In this paper we are concerned mainly with nodally 3-connected graphs, but a specialization to 3-connected graphs is made in § 12. In § 3 we discuss conditions for a nodally 3-connected graph to be planar, and in § 5 we discuss conditions for the existence of Kuratowski subgraphs of a given graph. In §§ 6-9 we show how to obtain a convex representation of a nodally 3-connected graph, without Kuratowski subgraphs, by solving a set of linear equations. Some extensions of these results to general graphs, with a proof of Kuratowski's theorem, are given in §§ 10-11. In § 12 we discuss the representation in the plane of a pair of dual graphs, and in § 13 we draw attention to some unsolved problems.",
"title": ""
},
{
"docid": "25f73f6a65d115443ef56b8d25527adc",
"text": "Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.",
"title": ""
},
{
"docid": "c2dd0a4616bdb5931debaad1edf06a60",
"text": "For polar codes with short-to-medium code length, list successive cancellation decoding is used to achieve a good error-correcting performance. However, list pruning in the current list decoding is based on the sorting strategy and its timing complexity is high. This results in a long decoding latency for large list size. In this work, aiming at a low-latency list decoding implementation, a double thresholding algorithm is proposed for a fast list pruning. As a result, with a negligible performance degradation, the list pruning delay is greatly reduced. Based on the double thresholding, a low-latency list decoding architecture is proposed and implemented using a UMC 90nm CMOS technology. Synthesis results show that, even for a large list size of 16, the proposed low-latency architecture achieves a decoding throughput of 220 Mbps at a frequency of 641 MHz.",
"title": ""
},
{
"docid": "ec0733962301d6024da773ad9d0f636d",
"text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.",
"title": ""
},
{
"docid": "2316e37df8796758c86881aaeed51636",
"text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.",
"title": ""
},
{
"docid": "03b4b786ba40b4c631fe679b591880aa",
"text": "The abundance of user-generated data in social media has incentivized the development of methods to infer the latent attributes of users, which are crucially useful for personalization, advertising and recommendation. However, the current user profiling approaches have limited success, due to the lack of a principled way to integrate different types of social relationships of a user, and the reliance on scarcely-available labeled data in building a prediction model. In this paper, we present a novel solution termed Collective Semi-Supervised Learning (CSL), which provides a principled means to integrate different types of social relationship and unlabeled data under a unified computational framework. The joint learning from multiple relationships and unlabeled data yields a computationally sound and accurate approach to model user attributes in social media. Extensive experiments using Twitter data have demonstrated the efficacy of our CSL approach in inferring user attributes such as account type and marital status. We also show how CSL can be used to determine important user features, and to make inference on a larger user population.",
"title": ""
}
] |
scidocsrr
|
01ee25fe6322230fcf237832e9d3cb93
|
Using Eye Tracking to Trace a Cognitive Process : Gaze Behaviour During Decision Making in a Natural Environment
|
[
{
"docid": "bd077cbf7785fc84e98724558832aaf6",
"text": "Two process tracing techniques, explicit information search and verbal protocols, were used to examine the information processing strategies subjects use in reaching a decision. Subjects indicated preferences among apartments. The number of alternatives available and number of dimensions of information available was varied across sets of apartments. When faced with a two alternative situation, the subjects employed search strategies consistent with a compensatory decision process. In contrast, when faced with a more complex (multialternative) decision task, the subjects employed decision strategies designed to eliminate some of the available alternatives as quickly as possible and on the basis of a limited amount of information search and evaluation. The results demonstrate that the information processing leading to choice will vary as a function of task complexity. An integration of research in decision behavior with the methodology and theory of more established areas of cognitive psychology, such as human problem solving, is advocated.",
"title": ""
},
{
"docid": "0d723c344ab5f99447f7ad2ff72c0455",
"text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.",
"title": ""
}
] |
[
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f6b974c04dceaea3176a0092304bab72",
"text": "Information-Centric Networking (ICN) has recently emerged as a promising Future Internet architecture that aims to cope with the increasing demand for highly scalable and efficient distribution of content. Moving away from the Internet communication model based in addressable hosts, ICN leverages in-network storage for caching, multi-party communication through replication, and interaction models that decouple senders and receivers. This novel networking approach has the potential to outperform IP in several dimensions, besides just content dissemination. Concretely, the rise of the Internet of Things (IoT), with its rich set of challenges and requirements placed over the current Internet, provide an interesting ground for showcasing the contribution and performance of ICN mechanisms. This work analyses how the in-network caching mechanisms associated to ICN, particularly those implemented in the Content-Centric Networking (CCN) architecture, contribute in IoT environments, particularly in terms of energy consumption and bandwidth usage. A simulation comparing IP and the CCN architecture (an instantiation of ICN) in IoT environments demonstrated that CCN leads to a considerable reduction of the energy consumed by the information producers and to a reduction of bandwidth requirements, as well as highlighted the flexibility for adapting current ICN caching mechanisms to target specific requirements of IoT.",
"title": ""
},
{
"docid": "14857144b52dbfb661d6ef4cd2c59b64",
"text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. to Abimbola for believing in me.",
"title": ""
},
{
"docid": "5b8a5a8c87acec59a5430cb5b28fb2e6",
"text": "This paper investigates the problems of outliers and/or noise in surface segmentation and proposes a statistically robust segmentation algorithm for laser scanning 3-D point cloud data. Principal component analysis (PCA)-based local saliency features, e.g., normal and curvature, have been frequently used in many ways for point cloud segmentation. However, PCA is sensitive to outliers; saliency features from PCA are nonrobust and inaccurate in the presence of outliers; consequently, segmentation results can be erroneous and unreliable. As a remedy, robust techniques, e.g., RANdom SAmple Consensus (RANSAC), and/or robust versions of PCA (RPCA) have been proposed. However, RANSAC is influenced by the well-known swamping effect, and RPCA methods are computationally intensive for point cloud processing. We propose a region growing based robust segmentation algorithm that uses a recently introduced maximum consistency with minimum distance based robust diagnostic PCA (RDPCA) approach to get robust saliency features. Experiments using synthetic and laser scanning data sets show that the RDPCA-based method has an intrinsic ability to deal with outlier- and/or noise-contaminated data. Results for a synthetic data set show that RDPCA is 105 times faster than RPCA and gives more accurate and robust results when compared with other segmentation methods. Compared with RANSAC and RPCA based methods, RDPCA takes almost the same time as RANSAC, but RANSAC results are markedly worse than RPCA and RDPCA results. Coupled with a segment merging algorithm, the proposed method is efficient for huge volumes of point cloud data consisting of complex objects surfaces from mobile, terrestrial, and aerial laser scanning systems.",
"title": ""
},
{
"docid": "88ea3f043b43a11a0a7d79e59a774c1f",
"text": "The purpose of this paper is to present an alternative systems thinking–based perspective and approach to the requirements elicitation process in complex situations. Three broad challenges associated with the requirements engineering elicitation in complex situations are explored, including the (1) role of the system observer, (2) nature of system requirements in complex situations, and (3) influence of the system environment. Authors have asserted that the expectation of unambiguous, consistent, complete, understandable, verifiable, traceable, and modifiable requirements is not consistent with complex situations. In contrast, complex situations are an emerging design reality for requirements engineering processes, marked by high levels of ambiguity, uncertainty, and emergence. This paper develops the argument that dealing with requirements for complex situations requires a change in paradigm. The elicitation of requirements for simple and technically driven systems is appropriately accomplished by proven methods. In contrast, the elicitation of requirements in complex situations (e.g., integrated multiple critical infrastructures, system-of-systems, etc.) requires more holistic thinking and can be enhanced by grounding in systems theory.",
"title": ""
},
{
"docid": "5495aeaa072a1f8f696298ebc7432045",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
{
"docid": "74ed962cbf02712f33dac9f901561cad",
"text": "Leak detection in transmission pipelines is crucially important for safe operation. Delay in detecting leaks leads to loss of property and human life in fire hazards and loss of valuable material. Leaking of methane and hydrocarbon gas causes negative impacts on the eco system such as global warming and air pollution. Pipeline leak detection systems play a key role in minimization of the probability of occurrence of leaks and hence their impacts. Today there are many available technologies in the domain of leak detection. This paper provides an overview on external and internal leak detection and location systems and a summary of comparison regarding performance of each system.",
"title": ""
},
{
"docid": "dfcc931d9cd7d084bbbcf400f44756a5",
"text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. We report experimental results on a 3 hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times realtime, depending on the acoustic conditions of the audio signal.",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "f8d554c215cc40ddc71171b3f266c43a",
"text": "Nowadays, Edge computing allows to push the application intelligence at the boundaries of a network in order to get high-performance processing closer to both data sources and end-users. In this scenario, the Horizon 2020 BEACON project - enabling federated Cloud-networking - can be used to setup Fog computing environments were applications can be deployed in order to instantiate Edge computing applications. In this paper, we focus on the deployment orchestration of Edge computing distributed services on such fog computing environments. We assume that a distributed service is composed of many microservices. Users, by means of geolocation deployment constrains can select regions in which microservices will be deployed. Specifically, we present an Orchestration Broker that starting from an ad-hoc OpenStack-based Heat Orchestraton Template (HOT) service manifest of an Edge computing distributed service produces several HOT microservice manifests including the the deployment instruction for each involved Fog computing node. Experiments prove the goodness of our approach.",
"title": ""
},
{
"docid": "a3a373130b5c602022449919dcc81f98",
"text": "We describe a method for registering and super-resolving moving vehicles from aerial surveillance video. The challenge of vehicle super-resolution lies in the fact that vehicles may be very small and thus frame-to-frame registration does not offer enough constraints to yield registration with sub-pixel accuracy. To overcome this, we first register the large-scale image backgrounds and then, relative to the background registration, register the small-scale moving vehicle over all frames simultaneously using a vehicle motion model. To solve for the vehicle motion parameters we optimize a cost function that incorporates both vehicle appearance and background appearance consistency. Once this process accurately registers a moving vehicle, it is super-resolved. We apply both a frequency domain and a spatial domain approach. The frequency domain approach can be used when the final registered vehicle motion is well approximated by shifts in the image plane. The robust regularized spatial domain approach handles all cases of vehicle motion.",
"title": ""
},
{
"docid": "c02865dab28db59a22b972d570c2929a",
"text": "............................................................................................................................. iii Table of",
"title": ""
},
{
"docid": "b2a670d90d53825c53d8ce0082333db6",
"text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.",
"title": ""
},
{
"docid": "9b02cd39293b2f2fb74de14ea3cdd67b",
"text": "Convolutional Neural Networks (CNNs) have been widely used in computer vision tasks, such as face recognition, and have achieved state-of-the-art results due to their ability to learn discriminative deep features. Conventionally, CNNs have been trained with Softmax as supervision signal to penalize the classification loss. In order to further enhance discriminative capability of deep features, we introduced a joint supervision signal, Git loss, which leverages on Softmax and Center loss functions. The aim of our loss function is to minimizes the intra-class variances as well as maximizes the interclass distances. Such minimization and maximization of deep features are considered ideal for face recognition task. Results obtained on two popular face recognition benchmarks datasets show that our proposed loss function achieves maximum separability between deep face features of different identities and achieves state-of-the-art accuracy on two major face recognition benchmark datasets: Labeled Faces in the Wild (LFW) and YouTube Faces (YTF).",
"title": ""
},
{
"docid": "81534e94c4d5714fadd7de63d7f3f631",
"text": "OBJECTIVES\nSocial capital has been studied due to its contextual influence on health. However, no specific assessment tool has been developed and validated for the measurement of social capital among 12-year-old adolescent students. The aim of the present study was to develop and validate a quick, simple assessment tool to measure social capital among adolescent students.\n\n\nMETHODS\nA questionnaire was developed based on a review of relevant literature. For such, searches were made of the Scientific Electronic Library Online, Latin American and Caribbean Health Sciences, The Cochrane Library, ISI Web of Knowledge, International Database for Medical Literature and PubMed Central bibliographical databases from September 2011 to January 2014 for papers addressing assessment tools for the evaluation of social capital. Focus groups were also formed by adolescent students as well as health, educational and social professionals. The final assessment tool was administered to a convenience sample from two public schools (79 students) and one private school (22 students), comprising a final sample of 101 students. Reliability and internal consistency were evaluated using the Kappa coefficient and Cronbach's alpha coefficient, respectively. Content validity was determined by expert consensus as well as exploratory and confirmatory factor analysis.\n\n\nRESULTS\nThe final version of the questionnaire was made up of 12 items. The total scale demonstrated very good internal consistency (Cronbach's alpha: 0.71). Reproducibility was also very good, as the Kappa coefficient was higher than 0.72 for the majority of items (range: 0.63 to 0.97). Factor analysis grouped the 12 items into four subscales: School Social Cohesion, School Friendships, Neighborhood Social Cohesion and Trust (school and neighborhood).\n\n\nCONCLUSIONS\nThe present findings indicate the validity and reliability of the Social Capital Questionnaire for Adolescent Students.",
"title": ""
},
{
"docid": "768a8cfff3f127a61f12139466911a94",
"text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.",
"title": ""
},
{
"docid": "3e5ae0b370b98185d95b428be727d1a8",
"text": "A 40-Gb/s receiver includes a continuous-time linear equalizer, a discrete-time linear equalizer, a two-tap decision-feedback equalizer, a clock and data recovery circuit, and a one-to-four deserializer. Hardware minimization and charge steering techniques are extensively used to reduce the power consumption by a factor of ten. Fabricated in 45-nm CMOS technology, the receiver exhibits a bathtub curve opening of 0.28 UI with a recovered clock jitter of 0.5 psrms.",
"title": ""
},
{
"docid": "31e558e1d306e204bfa64121749b75fc",
"text": "Experimental results in psychology have shown the important role of manipulation in guiding infant development. This has inspired work in developmental robotics as well. In this case, however, the benefits of this approach have been limited by the intrinsic difficulties of the task. Controlling the interaction between the robot and the environment in a meaningful and safe way is hard especially when little prior knowledge is available. We push the idea that haptic feedback can enhance the way robots interact with unmodeled environments. We approach grasping and manipulation as tasks driven mainly by tactile and force feedback. We implemented a grasping behavior on a robotic platform with sensitive tactile sensors and compliant actuators; the behavior allows the robot to grasp objects placed on a table. Finally, we demonstrate that the haptic feedback originated by the interaction with the objects carries implicit information about their shape and can be useful for learning.",
"title": ""
},
{
"docid": "aa7026774074ed81dd7836ef6dc44334",
"text": "To improve safety on the roads, next-generation vehicles will be equipped with short-range communication technologies. Many applications enabled by such communication will be based on a continuous broadcast of information about the own status from each vehicle to the neighborhood, often referred as cooperative awareness or beaconing. Although the only standardized technology allowing direct vehicle-to-vehicle (V2V) communication has been IEEE 802.11p until now, the latest release of long-term evolution (LTE) included advanced device-to-device features designed for the vehicular environment (LTE-V2V) making it a suitable alternative to IEEE 802.11p. Advantages and drawbacks are being considered for both technologies, and which one will be implemented is still under debate. The aim of this paper is thus to provide an insight into the performance of both technologies for cooperative awareness and to compare them. The investigation is performed analytically through the implementation of novel models for both IEEE 802.11p and LTE-V2V able to address the same scenario, with consistent settings and focusing on the same output metrics. The proposed models take into account several aspects that are often neglected by related works, such as hidden terminals and capture effect in IEEE 802.11p, the impact of imperfect knowledge of vehicles position on the resource allocation in LTE-V2V, and the various modulation and coding scheme combinations that are available in both technologies. Results show that LTE-V2V allows us to maintain the required quality of service at even double or more the distance than IEEE 802.11p in moderate traffic conditions. However, due to the half-duplex nature of devices and the structure of LTE frames, it shows lower capacity than IEEE 802.11p if short distances and very high vehicle density are targeted.",
"title": ""
}
] |
scidocsrr
|
a88aaf49001e63adafce5bd5554b17df
|
Democratizing Production-Scale Distributed Deep Learning
|
[
{
"docid": "3435041805c5cb2629d70ff909c10637",
"text": "Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve the system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large minibatch size (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedup on AlexNet and ResNet-50 respectively than NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 with 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs spent 15 minutes and achieved 74.9% top-1 test accuracy, and another KNL-based system with 2048 Intel KNLs spent 20 minutes and achieved 75.4% accuracy. Our training system can achieve 75.8% top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet with 95 epochs, our system can achieve 58.7% top-1 test accuracy within 4 minutes, which also outperforms all other existing systems.",
"title": ""
}
] |
[
{
"docid": "2281d739c6858d35eb5f3650d2d03474",
"text": "We discuss an implementation of the RRT* optimal motion planning algorithm for the half-car dynamical model to enable autonomous high-speed driving. To develop fast solutions of the associated local steering problem, we observe that the motion of a special point (namely, the front center of oscillation) can be modeled as a double integrator augmented with fictitious inputs. We first map the constraints on tire friction forces to constraints on these augmented inputs, which provides instantaneous, state-dependent bounds on the curvature of geometric paths feasibly traversable by the front center of oscillation. Next, we map the vehicle's actual inputs to the augmented inputs. The local steering problem for the half-car dynamical model can then be transformed to a simpler steering problem for the front center of oscillation, which we solve efficiently by first constructing a curvature-bounded geometric path and then imposing a suitable speed profile on this geometric path. Finally, we demonstrate the efficacy of the proposed motion planner via numerical simulation results.",
"title": ""
},
{
"docid": "c9d833d872ab0550edb0aa26565ac76b",
"text": "In this paper we investigate the potential of the neural machine translation (NMT) when taking into consideration the linguistic aspect of target language. From this standpoint, the NMT approach with attention mechanism [1] is extended in order to produce several linguistically derived outputs. We train our model to simultaneously output the lemma and its corresponding factors (e.g. part-of-speech, gender, number). The word level translation is built with a mapping function using a priori linguistic information. Compared to the standard NMT system, factored architecture increases significantly the vocabulary coverage while decreasing the number of unknown words. With its richer architecture, the Factored NMT approach allows us to implement several training setup that will be discussed in detail along this paper. On the IWSLT’15 English-to-French task, FNMT model outperforms NMT model in terms of BLEU score. A qualitative analysis of the output on a set of test sentences shows the effectiveness of the FNMT model.",
"title": ""
},
{
"docid": "67421eaa6f719f37fd91407714ba2a2d",
"text": "With the widespread use of online shopping in recent years, consumer search requests for products have become more diverse. Previous web search methods have used adjectives as input by consumers. However, given that the number of adjectives that can be used to express textures is limited, it is debatable whether adjectives are capable of richly expressing variations of product textures. In Japanese, tactile experiences are easily and frequently expressed by onomatopoeia, such as “ fuwa-fuwa” which indicates a soft and light sensation. Onomatopoeia are useful for understanding not only material textures but also a user’s intuitive, sensitive, and even ambiguous feelings evoked by materials. In this study, we propose a system to recommend products corresponding to product textures associated with Japanese onomatopoeia based on their symbolic sound associations between the onomatopoeia phonemes and the texture sensations. Our system quantitatively estimates the texture sensations of onomatopoeia input by users, and calculates the similarities between the users’ impressions of the onomatopoeia and those of product pictures. Our system also suggests products which best match the entered onomatopoeia. An evaluation of our method revealed that the best performance was achieved when the SIFT features, the colors of product pictures, and text describing product pictures were used; Specifically, precision was 66 for the top 15 search results. Our system is expected to contribute to online shopping activity as an intuitive product recommendation system.",
"title": ""
},
{
"docid": "992d71459b616bfe72845493a6f8f910",
"text": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.",
"title": ""
},
{
"docid": "ecbd9201a7f8094a02fcec2c4f78240d",
"text": "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity and (iii) our approach generalizes on different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student has small accuracy drop, achieves better performance than other knowledge transfer approaches and it surpasses the performance of the same network trained with labels. In addition, we demonstrate state-ofthe-art results compared to other compression strategies.",
"title": ""
},
{
"docid": "77059bf4b66792b4f34bc78bbb0b373a",
"text": "Cloud computing systems host most of today's commercial business applications yielding it high revenue which makes it a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi tenancy and virtualization of resources prevalent in cloud. While we do cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we have come up with a remote evidence collection and pre-processing framework using Struts and Hadoop distributed file system. Collection of VM disk images, logs etc., are initiated through a pull model when triggered by the investigator, whereas cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross drive analysis.",
"title": ""
},
{
"docid": "081347f2376f4e4061ea5009af137ca7",
"text": "The Internet of things can be defined as to make the “things” belong to the Internet. However, many wonder if the current Internet can support such a challenge. For this and other reasons, hundreds of worldwide initiatives to redesign the Internet are underway. This article discusses the perspectives, challenges and opportunities behind a future Internet that fully supports the “things”, as well as how the “things” can help in the design of a more synergistic future Internet. Keywords–Internet of things, smart things, future Internet, software-defined networking, service-centrism, informationcentrism, ID/Loc splitting, security, privacy, trust.",
"title": ""
},
{
"docid": "d21ec7373565211670a0b43f6e39cd90",
"text": "In this paper, resonant tank design procedure and practical design considerations are presented for a high performance LLC multiresonant dc-dc converter in a two-stage smart battery charger for neighborhood electric vehicle applications. The multiresonant converter has been analyzed and its performance characteristics are presented. It eliminates both low- and high-frequency current ripple on the battery, thus maximizing battery life without penalizing the volume of the charger. Simulation and experimental results are presented for a prototype unit converting 390 V from the input dc link to an output voltage range of 48-72 V dc at 650 W. The prototype achieves a peak efficiency of 96%.",
"title": ""
},
{
"docid": "f975a1fa2905f8ae42ced1f13a88a15b",
"text": "This paper presents a new method of detecting and tracking the boundaries of drivable regions in road without road-markings. As unmarked roads connect residential places to public roads, the capability of autonomously driving on such a roadway is important to truly realize self-driving cars in daily driving scenarios. To detect the left and right boundaries of drivable regions, our method first examines the image region at the front of ego-vehicle and then uses the appearance information of that region to identify the boundary of the drivable region from input images. Due to variation in the image acquisition condition, the image features necessary for boundary detection may not be present. When this happens, a boundary detection algorithm working frame-by-frame basis would fail to successfully detect the boundaries. To effectively handle these cases, our method tracks, using a Bayes filter, the detected boundaries over frames. Experiments using real-world videos show promising results.",
"title": ""
},
{
"docid": "00100476074a90ecb616308b63a128e8",
"text": "We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attributeclass relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.",
"title": ""
},
{
"docid": "ce650daedc7ba277d245a2150062775f",
"text": "Amongst the large number of write-and-throw-away-spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our strive to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed was also to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separated windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.",
"title": ""
},
{
"docid": "f7252ab3871dfae3860f575515867db6",
"text": "This review paper deals with IoT that can be used to improve cultivation of food crops, as lots of research work is going on to monitor the effective food crop cycle, since from the start to till harvesting the famers are facing very difficult for better yielding of food crops. Although few initiatives have also been taken by the Indian Government for providing online and mobile messaging services to farmers related to agricultural queries and agro vendor’s information to farmers even such information’s are not enough for farmer so still lot of research work need to be carried out on current agricultural approaches so that continuous sensing and monitoring of crops by convergence of sensors with IoT and making farmers to aware about crops growth, harvest time periodically and in turn making high productivity of crops and also ensuring correct delivery of products to end consumers at right place and right time.",
"title": ""
},
{
"docid": "9326b7c1bd16e7db931131f77aaad687",
"text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.",
"title": ""
},
{
"docid": "4248fb006221fbb74d565705dcbc5a7a",
"text": "Shot boundary detection (SBD) is an important and fundamental step in video content analysis such as content-based video indexing, browsing, and retrieval. In this paper, a hybrid SBD method is presented by integrating a high-level fuzzy Petri net (HLFPN) model with keypoint matching. The HLFPN model with histogram difference is executed as a predetection. Next, the speeded-up robust features (SURF) algorithm that is reliably robust to image affine transformation and illumination variation is used to figure out all possible false shots and the gradual transition based on the assumption from the HLFPN model. The top-down design can effectively lower down the computational complexity of SURF algorithm. The proposed approach has increased the precision of SBD and can be applied in different types of videos.",
"title": ""
},
{
"docid": "bcf55ba5534aca41cefddb6f4b0b4d22",
"text": "In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple-access situation, multiple receive antennas can also be used to spatially separate signals from different users (multiple-access gain). Recent work has characterized the fundamental tradeoff between diversity and multiplexing gains in the point-to-point scenario. In this paper, we extend the results to a multiple-access fading channel. Our results characterize the fundamental tradeoff between the three types of gain and provide insights on the capabilities of multiple antennas in a network context.",
"title": ""
},
{
"docid": "7fbc3820c259d9ea58ecabaa92f8c875",
"text": "The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.",
"title": ""
},
{
"docid": "0e68fa08edfc2dcb52585b13d0117bf1",
"text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.",
"title": ""
},
{
"docid": "09bfe483e80464d0116bda5ec57c7d66",
"text": "The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. We address this problem and develop sequential and distributed algorithms that are significantly more efficient than state-of-the-art methods while still guaranteeing the same outliers. By combining simple but effective indexing and disk block accessing techniques, we have developed a sequential algorithm iOrca that is up to an order-of-magnitude faster than the state-of-the-art. The indexing scheme is based on sorting the data points in order of increasing distance from a fixed reference point and then accessing those points based on this sorted order. To speed up the basic outlier detection technique, we develop two distributed algorithms (DOoR and iDOoR) for modern distributed multi-core clusters of machines, connected on a ring topology. The first algorithm passes data blocks from each machine around the ring, incrementally updating the nearest neighbors of the points passed. By maintaining a cutoff threshold, it is able to prune a large number of points in a distributed fashion. The second distributed algorithm extends this basic idea with the indexing scheme discussed earlier. In our experiments, both distributed algorithms exhibit significant improvements compared to the state-of-the-art distributed method [13].",
"title": ""
},
{
"docid": "2a41af8ad6000163951b9e7399ce7444",
"text": "Accurate location of the endpoints of an isolated word is important for reliable and robust word recognition. The endpoint detection problem is nontrivial for nonstationary backgrounds where artifacts (i.e., nonspeech events) may be introduced by the speaker, the recording environment, and the transmission system. Several techniques for the detection of the endpoints of isolated words recorded over a dialed-up telephone line were studied. The techniques were broadly classified as either explicit, implicit, or hybrid in concept. The explicit techniques for endpoint detection locate the endpoints prior to and independent of the recognition and decision stages of the system. For the implicit methods, the endpoints are determined solely by the recognition and decision stages Of the system, i.e., there is no separate stage for endpoint detection. The hybrid techniques incorporate aspects from both the explicit and implicit methods. Investigations showed that the hybrid techniques consistently provided the best estimates for both of the word endpoints and, correspondingly, the highest recognition accuracy of the three classes studied. A hybrid endpoint detector is proposed which gives a rejection rate of less than 0.5 percent, while providing recognition accuracy close to that obtained from hand-edited endpoints.",
"title": ""
},
{
"docid": "57b35e32b92b54fc1ea7724e73b26f39",
"text": "The authors examined relations between the Big Five personality traits and academic outcomes, specifically SAT scores and grade-point average (GPA). Openness was the strongest predictor of SAT verbal scores, and Conscientiousness was the strongest predictor of both high school and college GPA. These relations replicated across 4 independent samples and across 4 different personality inventories. Further analyses showed that Conscientiousness predicted college GPA, even after controlling for high school GPA and SAT scores, and that the relation between Conscientiousness and college GPA was mediated, both concurrently and longitudinally, by increased academic effort and higher levels of perceived academic ability. The relation between Openness and SAT verbal scores was independent of academic achievement and was mediated, both concurrently and longitudinally, by perceived verbal intelligence. Together, these findings show that personality traits have independent and incremental effects on academic outcomes, even after controlling for traditional predictors of those outcomes. ((c) 2007 APA, all rights reserved).",
"title": ""
}
] |
scidocsrr
|
64fb7af3a0293707c72f34f8fedd7fe5
|
Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining
|
[
{
"docid": "18a524545090542af81e0a66df3a1395",
"text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.",
"title": ""
},
{
"docid": "6c9acb831bc8dc82198aef10761506be",
"text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.",
"title": ""
}
] |
[
{
"docid": "4835360fec2ca50355d71f0d0ba76cbc",
"text": "The surge in global population is compelling a shift toward smart agriculture practices. This coupled with the diminishing natural resources, limited availability of arable land, increase in unpredictable weather conditions makes food security a major concern for most countries. As a result, the use of Internet of Things (IoT) and data analytics (DA) are employed to enhance the operational efficiency and productivity in the agriculture sector. There is a paradigm shift from use of wireless sensor network (WSN) as a major driver of smart agriculture to the use of IoT and DA. The IoT integrates several existing technologies, such as WSN, radio frequency identification, cloud computing, middleware systems, and end-user applications. In this paper, several benefits and challenges of IoT have been identified. We present the IoT ecosystem and how the combination of IoT and DA is enabling smart agriculture. Furthermore, we provide future trends and opportunities which are categorized into technological innovations, application scenarios, business, and marketability.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "cfb665d0ca71289a4da834584604250b",
"text": "This work is motivated by the engineering task of achieving a near state-of-the-art face recognition on a minimal computing budget running on an embedded system. Our main technical contribution centers around a novel training method, called Multibatch, for similarity learning, i.e., for the task of generating an invariant “face signature” through training pairs of “same” and “not-same” face images. The Multibatch method first generates signatures for a mini-batch of k face images and then constructs an unbiased estimate of the full gradient by relying on all k2 k pairs from the mini-batch. We prove that the variance of the Multibatch estimator is bounded by O(1/k2), under some mild conditions. In contrast, the standard gradient estimator that relies on random k/2 pairs has a variance of order 1/k. The smaller variance of the Multibatch estimator significantly speeds up the convergence rate of stochastic gradient descent. Using the Multibatch method we train a deep convolutional neural network that achieves an accuracy of 98.2% on the LFW benchmark, while its prediction runtime takes only 30msec on a single ARM Cortex A9 core. Furthermore, the entire training process took only 12 hours on a single Titan X GPU.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
{
"docid": "4667b31c7ee70f7bc3709fc40ec6140f",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "f88235f1056d66c5dc188fcf747bf570",
"text": "In this paper, we compare the differences between traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze its profit-and-loss vectors of Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly 90% chance that the difference gap between the bet ratio recommended by Kelly criterion and and Vince's optimal f lies within 2%. Therefore, in the actual transaction, the values from Kelly Criterion could be taken directly as the optimal bet ratio for funds control.",
"title": ""
},
{
"docid": "c5928a67d0b8a6a1c40b7cad6ac03d16",
"text": "Drug addiction represents a dramatic dysregulation of motivational circuits that is caused by a combination of exaggerated incentive salience and habit formation, reward deficits and stress surfeits, and compromised executive function in three stages. The rewarding effects of drugs of abuse, development of incentive salience, and development of drug-seeking habits in the binge/intoxication stage involve changes in dopamine and opioid peptides in the basal ganglia. The increases in negative emotional states and dysphoric and stress-like responses in the withdrawal/negative affect stage involve decreases in the function of the dopamine component of the reward system and recruitment of brain stress neurotransmitters, such as corticotropin-releasing factor and dynorphin, in the neurocircuitry of the extended amygdala. The craving and deficits in executive function in the so-called preoccupation/anticipation stage involve the dysregulation of key afferent projections from the prefrontal cortex and insula, including glutamate, to the basal ganglia and extended amygdala. Molecular genetic studies have identified transduction and transcription factors that act in neurocircuitry associated with the development and maintenance of addiction that might mediate initial vulnerability, maintenance, and relapse associated with addiction.",
"title": ""
},
{
"docid": "10d14531df9190f5ffb217406fe8eb49",
"text": "Web technology has enabled e-commerce. However, in our review of the literature, we found little research on how firms can better position themselves when adopting e-commerce for revenue generation. Drawing upon technology diffusion theory, we developed a conceptual model for assessing e-commerce adoption and migration, incorporating six factors unique to e-commerce. A series of propositions were then developed. Survey data of 1036 firms in a broad range of industries were collected and used to test our model. Our analysis based on multi-nominal logistic regression demonstrated that technology integration, web functionalities, web spending, and partner usage were significant adoption predictors. The model showed that these variables could successfully differentiate non-adopters from adopters. Further, the migration model demonstrated that web functionalities, web spending, and integration of externally oriented inter-organizational systems tend to be the most influential drivers in firms’ migration toward e-commerce, while firm size, partner usage, electronic data interchange (EDI) usage, and perceived obstacles were found to negatively affect ecommerce migration. This suggests that large firms, as well as those that have been relying on outsourcing or EDI, tended to be slow to migrate to the internet platform. # 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a2130c0316eea0fa510f381ea312b65e",
"text": "A technique for building consistent 3D reconstructions from many views based on fitting a low rank matrix to a matrix with missing data is presented. Rank-four submatrices of minimal, or slightly larger, size are sampled and spans of their columns are combined to constrain a basis of the fitted matrix. The error minimized is expressed in terms of the original subspaces which leads to a better resistance to noise compared to previous methods. More than 90% of the missing data can be handled while finding an acceptable solution efficiently. Applications to 3D reconstruction using both affine and perspective camera models are shown. For the perspective model, a new linear method based on logarithms of positive depths from chirality is introduced to make the depths consistent with an overdetermined set of epipolar geometries. Results are shown for scenes and sequences of various types. Many images in open and closed sequences in narrow and wide base-line setups are reconstructed with reprojection errors around one pixel. It is shown that reconstructed cameras can be used to obtain dense reconstructions from epipolarly aligned images.",
"title": ""
},
{
"docid": "3b4607a6b0135eba7c4bb0852b78dda9",
"text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.",
"title": ""
},
{
"docid": "b333be40febd422eae4ae0b84b8b9491",
"text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.",
"title": ""
},
{
"docid": "3c98c5bd1d9a6916ce5f6257b16c8701",
"text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.",
"title": ""
},
{
"docid": "80d457b352362d2b72acb26ca5b8a382",
"text": "Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). Whether and how bilingual experience alters the transition to native language specific phonetic discrimination is important both theoretically and from a practical standpoint. Using whole head magnetoencephalography (MEG), we examined brain responses to Spanish and English syllables in Spanish-English bilingual and English monolingual 11-month-old infants. Monolingual infants showed sensitivity to English, while bilingual infants were sensitive to both languages. Neural responses indicate that the dual sensitivity of the bilingual brain is achieved by a slower transition from acoustic to phonetic sound analysis, an adaptive and advantageous response to increased variability in language input. Bilingual neural responses extend into the prefrontal and orbitofrontal cortex, which may be related to their previously described bilingual advantage in executive function skills. A video abstract of this article can be viewed at: https://youtu.be/TAYhj-gekqw.",
"title": ""
},
{
"docid": "60b876a2065587fc7f152d452605dc14",
"text": "Fillers are frequently used in beautifying procedures. Despite major advancements of the chemical and biological features of injected materials, filler-related adverse events may occur, and can substantially impact the clinical outcome. Filler granulomas become manifest as visible grains, nodules, or papules around the site of the primary injection. Early recognition and proper treatment of filler-related complications is important because effective treatment options are available. In this report, we provide a comprehensive overview of the differential diagnosis and diagnostics and develop an algorithm of successful therapy regimens.",
"title": ""
},
{
"docid": "28641a6621a31bf720586e4c5980645b",
"text": "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [20] of temporal ensembling [8], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge [12]. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised",
"title": ""
},
{
"docid": "663342554879c5464a7e1aff969339b7",
"text": "Esthetic surgery of external female genitalia remains an uncommon procedure. This article describes a novel, de-epithelialized, labial rim flap technique for labia majora augmentation using de-epithelialized labia minora tissue otherwise to be excised as an adjunct to labia minora reduction. Ten patients were included in the study. The protruding segments of the labia minora were de-epithelialized with a fine scissors or scalpel instead of being excised, and a bulky section of subcutaneous tissue was obtained. Between the outer and inner surfaces of the labia minora, a flap with a subcutaneous pedicle was created in continuity with the de-epithelialized marginal tissue. A pocket was dissected in the labium majus, and the flap was transposed into the pocket to augment the labia majora. Mean patient age was 39.9 (±13.9) years, mean operation time was 60 min, and mean follow-up period was 14.5 (±3.4) months. There were no major complications (hematoma, wound dehiscence, infection) following surgery. No patient complained of postoperative difficulty with coitus or dyspareunia. All patients were satisfied with the final appearance. Several methods for labia minora reduction have been described. Auxiliary procedures are required with labia minora reduction for better results. Nevertheless, few authors have taken into account the final esthetic appearance of the whole female external genitalia. The described technique in this study is indicated primarily for mild atrophy of the labia majora with labia minora hypertrophy; the technique resulted in perfect patient satisfaction with no major complications or postoperative coital problems. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "29dab83f08d38702e09acec2f65346b3",
"text": "This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for contentaware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outpues a retargeted image. Retargeting is performed through a shift reap, which is a pixet-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to r content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure tosses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.",
"title": ""
},
{
"docid": "189d0b173f8a9e0b3deb21398955dc3c",
"text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.",
"title": ""
},
{
"docid": "d569902303b93274baf89527e666adc0",
"text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.",
"title": ""
},
{
"docid": "9420760d6945440048cee3566ce96699",
"text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.",
"title": ""
}
] |
scidocsrr
|
f967052774d8ea4c17830f7c5657c9e9
|
Addressing the challenges of underspecification in web search
|
[
{
"docid": "419c721c2d0a269c65fae59c1bdb273c",
"text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.",
"title": ""
}
] |
[
{
"docid": "2c8f4c911c298cdc19a420781c569d9c",
"text": "Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during a colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions which limits accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. Estimated depth is used to reconstruct the topography of the surface of the colon from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically-realistic colon.We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images. We show that the estimated depth maps can be used for reconstructing the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for detection, segmentation and classification of lesions.",
"title": ""
},
{
"docid": "861e2a3c19dafdd3273dc718416309c2",
"text": "For the last 40 years high - capacity Unmanned Air Vehicles have been use mostly for military services such as tracking, surveillance, engagement with active weapon or in the simplest term for data acquisition purpose. Unmanned Air Vehicles are also demanded commercially because of their advantages in comparison to manned vehicles such as their low manufacturing and operating cost, configuration flexibility depending on customer request, not risking pilot in the difficult missions. Nevertheless, they have still open issues such as integration to the manned flight air space, reliability and airworthiness. Although Civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach 10% level within the next 5 years. UAV systems with their useful equipment (camera, hyper spectral imager, air data sensors and with similar equipment) have been in use more and more for civil applications: Tracking and monitoring in the event of agriculture / forest / marine pollution / waste / emergency and disaster situations; Mapping for land registry and cadastre; Wildlife and ecologic monitoring; Traffic Monitoring and; Geology and mine researches. They can bring minimal risk and cost advantage to many civil applications, in which it was risky and costly to use manned air vehicles before. When the cost of Unmanned Air Vehicles designed and produced for military service is taken into account, civil market demands lower cost and original products which are suitable for civil applications. Most of civil applications which are mentioned above require UAVs that are able to take off and land on limited runway, and moreover move quickly in the operation region for mobile applications but hover for immobile measurement and tracking when necessary. This points to a hybrid unmanned vehicle concept optimally, namely the Vertical Take Off and Landing (VTOL) UAVs. At the same time, this system requires an efficient cost solution for applicability / convertibility for different civil applications. It means an Air Vehicle having easily portability of payload depending on application concept and programmability of operation (hover and cruise flight time) specific to the application. The main topic of this project is designing, producing and testing the TURAC VTOL UAV that have the following features : Vertical takeoff and landing, and hovering like helicopter ; High cruise speed and fixed-wing ; Multi-functional and designed for civil purpose ; The project involves two different variants ; The TURAC A variant is a fully electrical platform which includes 2 tilt electric motors in the front, and a fixed electric motor and ducted fan in the rear ; The TURAC B variant uses fuel cells.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "9f9128951d6c842689f61fc19c79f238",
"text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. The performance of the algorithms is illustrated using simulated data.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "4eebd4a2d5c50a2d7de7c36c5296786d",
"text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.",
"title": ""
},
{
"docid": "b10074ccf133a3c18a2029a5fe52f7ff",
"text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.",
"title": ""
},
{
"docid": "fd54d540c30968bb8682a4f2eee43c8d",
"text": "This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attempts to support the dialogue between adviser and student through an overview of study progress, peer comparison, and by triggering insights based on facts as a starting point for discussion and argumentation. We report on the iterative design process and evaluation results of a deployment in 97 advising sessions. We have found that the dashboard supports the current adviser-student dialogue, helps them motivate students, triggers conversation, and provides tools to add personalization, depth, and nuance to the advising session. It provides insights at a factual, interpretative, and reflective level and allows both adviser and student to take an active role during the session.",
"title": ""
},
{
"docid": "d2c36f67971c22595bc483ebb7345404",
"text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off -current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.",
"title": ""
},
{
"docid": "b63077105e140546a7485167339fdf62",
"text": "Deep multi-layer perceptron neural networks are used in many state-of-the-art systems for machine perception (e.g., speech-to-text, image classification, and object detection). Once a network is trained to do a specific task, e.g., finegrained bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as finegrained flower recognition. When new tasks are added, deep neural networks are prone to catastrophically forgetting previously learned information. Catastrophic forgetting has hindered the use of neural networks in deployed applications that require lifelong learning. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have yet to be compared and the kinds of tests used to evaluate individual methods vary greatly. In this paper, we compare multiple mechanisms designed to mitigate catastrophic forgetting in neural networks. Experiments showed that the mechanism(s) that are critical for optimal performance vary based on the incremental training paradigm and type of data being used.",
"title": ""
},
{
"docid": "b0133ea142da1d4f2612407d4d8bf6c0",
"text": "The ability to transfer knowledge gained in previous tasks into new contexts is one of the most important mechanisms of human learning. Despite this, adapting autonomous behavior to be reused in partially similar settings is still an open problem in current robotics research. In this paper, we take a small step in this direction and propose a generic framework for learning transferable motion policies. Our goal is to solve a learning problem in a target domain by utilizing the training data in a different but related source domain. We present this in the context of an autonomous MAV flight using monocular reactive control, and demonstrate the efficacy of our proposed approach through extensive real-world flight experiments in outdoor cluttered environments.",
"title": ""
},
{
"docid": "170e2f1ad2ffc7ab1666205fdafe01de",
"text": "One of the important issues concerning the spreading process in social networks is the influence maximization. This is the problem of identifying the set of the most influential nodes in order to begin the spreading process based on an information diffusion model in the social networks. In this study, two new methods considering the community structure of the social networks and influence-based closeness centrality measure of the nodes are presented to maximize the spread of influence on the multiplication threshold, minimum threshold and linear threshold information diffusion models. The main objective of this study is to improve the efficiency with respect to the run time while maintaining the accuracy of the final influence spread. Efficiency improvement is obtained by reducing the number of candidate nodes subject to evaluation in order to find the most influential. Experiments consist of two parts: first, the effectiveness of the proposed influence-based closeness centrality measure is established by comparing it with available centrality measures; second, the evaluations are conducted to compare the two proposed community-based methods with well-known benchmarks in the literature on the real datasets, leading to the results demonstrate the efficiency and effectiveness of these methods in maximizing the influence spread in social networks.",
"title": ""
},
{
"docid": "b8322d65e61be7fb252b2e418df85d3e",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "f76f400bbb71c724657082d42eb7406e",
"text": "Semantic segmentation is a critical module in robotics related applications, especially autonomous driving. Most of the research on semantic segmentation is focused on improving the accuracy with less attention paid to computationally efficient solutions. Majority of the efficient semantic segmentation algorithms have customized optimizations without scalability and there is no systematic way to compare them. In this paper, we present a real-time segmentation benchmarking framework and study various segmentation algorithms for autonomous driving. We implemented a generic meta-architecture via a decoupled design where different types of encoders and decoders can be plugged in independently. We provide several example encoders including VGG16, Resnet18, MobileNet, and ShuffleNet and decoders including SkipNet, UNet and Dilation Frontend. The framework is scalable for addition of new encoders and decoders developed in the community for other vision tasks. We performed detailed experimental analysis on cityscapes dataset for various combinations of encoder and decoder. The modular framework enabled rapid prototyping of a custom efficient architecture which provides ~x143 GFLOPs reduction compared to SegNet and runs real-time at ~15 fps on NVIDIA Jetson TX2. The source code of the framework is publicly available.",
"title": ""
},
{
"docid": "4449b826b2a6acb5ce10a0bcacabc022",
"text": "Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definition of RDF schemas and designation of super peers. We present a scalable distributed RDF repository (RDFPeers) that stores each triple at three places in a multi-attribute addressable network by applying globally known hash functions to its subject predicate and object. Thus all nodes know which node is responsible for storing triple values they are looking for and both exact-match and range queries can be efficiently routed to those nodes. RDFPeers has no single point of failure nor elevated peers and does not require the prior definition of RDF schemas. Queries are guaranteed to find matched triples in the network if the triples exist. In RDFPeers both the number of neighbors per node and the number of routing hops for inserting RDF triples and for resolving most queries are logarithmic to the number of nodes in the network. We further performed experiments that show that the triple-storing load in RDFPeers differs by less than an order of magnitude between the most and the least loaded nodes for real-world RDF data.",
"title": ""
},
{
"docid": "f614df1c1775cd4e2a6927fce95ffa46",
"text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR",
"title": ""
},
{
"docid": "ca4e2cff91621bca4018ce1eca5450e2",
"text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an <italic>inexact </italic> FW algorithm. Using a diminishing step size rule and letting <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math></inline-formula> be the iteration number, we show that the DeFW algorithm's convergence rate is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t)$</tex-math></inline-formula> for convex objectives; is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t^2)$</tex-math></inline-formula> for strongly convex objectives with the optimal solution in the interior of the constraint set; and is <inline-formula> <tex-math notation=\"LaTeX\">${\\mathcal O}(1/\\sqrt{t})$</tex-math></inline-formula> toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.",
"title": ""
},
{
"docid": "4d3195c6fd592a7b8379bc61529b44c3",
"text": "Financial institutions all over the world are providing banking services via information systems, such as: automated teller machines (ATMs), Internet banking, and telephone banking, in an effort to remain competitive as well as enhancing customer service. However, the acceptance of such banking information systems (BIS) in developing countries remains open. The classical Technology Acceptance Model (TAM) has been well validated over hundreds of studies in the past two decades. This study contributed to the extensive body of research of technology acceptance by attempting to validate the integration of trust and computer self-efficacy (CSE) constructs into the classical TAM model. Moreover, the key uniqueness of this work is in the context of BIS in a developing country, namely Jamaica. Based on structural equations modeling using data of 374 customers from three banks in Jamaica, this study results indicated that the classic TAM provided a better fit than the extended TAM with Trust and CSE. However, the results also indicated that trust is indeed a significant construct impacting both perceived usefulness and perceived ease-of-use. Additionally, test for gender differences indicated that across all study participants, only trust was found to be significantly different between male and female bank customers. Conclusions and recommendations for future research are also provided.",
"title": ""
},
{
"docid": "cc1876cf1d71be6c32c75bd2ded25e65",
"text": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies.",
"title": ""
},
{
"docid": "b8505166c395750ee47127439a4afa1a",
"text": "Modern replicated data stores aim to provide high availability, by immediately responding to client requests, often by implementing objects that expose concurrency. Such objects, for example, multi-valued registers (MVRs), do not have sequential specifications. This paper explores a recent model for replicated data stores that can be used to precisely specify causal consistency for such objects, and liveness properties like eventual consistency, without revealing details of the underlying implementation. The model is used to prove the following results: An eventually consistent data store implementing MVRs cannot satisfy a consistency model strictly stronger than observable causal consistency (OCC). OCC is a model somewhat stronger than causal consistency, which captures executions in which client observations can use causality to infer concurrency of operations. This result holds under certain assumptions about the data store. Under the same assumptions, an eventually consistent and causally consistent replicated data store must send messages of unbounded size: If s objects are supported by n replicas, then, for every k > 1, there is an execution in which an Ω({n,s} k)-bit message is sent.",
"title": ""
}
] |
scidocsrr
|
aa230d13a85bb2fb47cbd0bcd514b38f
|
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
|
[
{
"docid": "c943fcc6664681d832133dc8739e6317",
"text": "The explosion in online advertisement urges to better estimate the click prediction of ads. For click prediction on single ad impression, we have access to pairwise relevance among elements in an impression, but not to global interaction among key features of elements. Moreover, the existing method on sequential click prediction treats propagation unchangeable for different time intervals. In this work, we propose a novel model, Convolutional Click Prediction Model (CCPM), based on convolution neural network. CCPM can extract local-global key features from an input instance with varied elements, which can be implemented for not only single ad impression but also sequential ad impression. Experiment results on two public large-scale datasets indicate that CCPM is effective on click prediction.",
"title": ""
},
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
},
{
"docid": "fd03cf7e243571e9b3e81213fe91fd29",
"text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.",
"title": ""
}
] |
[
{
"docid": "98a703bc054e871826173e2517074d06",
"text": "Several attempts have been made in the past to construct encoding schemes that allow modularity to emerge in evolving systems, but success is limited. We believe that in order to create successful and scalable encodings for emerging modularity, we first need to explore the benefits of different types of modularity by hard-wiring these into evolvable systems. In this paper we explore different ways of exploiting sensory symmetry inherent in the agent in the simple game Cellz by evolving symmetrically identical modules. It is concluded that significant increases in both speed of evolution and final fitness can be achieved relative to monolithic controllers. Furthermore, we show that simple function approximation task that exhibits sensory symmetry can be used as a quick approximate measure of the utility of an encoding scheme for the more complex game-playing task.",
"title": ""
},
{
"docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11",
"text": "e eective of information retrieval (IR) systems have become more important than ever. Deep IR models have gained increasing aention for its ability to automatically learning features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resemble a black box. erefore, it is necessary to identify the dierence between automatically learned features by deep IR models and hand-craed features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the dierences between these deep IR models. is paper aims to conduct a deep investigation on deep IR models. Specically, we conduct an extensive empirical study on two dierent datasets, including Robust and LETOR4.0. We rst compared the automatically learned features and handcraed features on the respects of query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-craed features. erefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two dierent categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that two types of deep IR models focus on dierent categories of words, including topic-related words and query-related words.",
"title": ""
},
{
"docid": "729fac8328b57376a954f2e7fc10405e",
"text": "Generative Adversarial Networks are proved to be efficient on various kinds of image generation tasks. However, it is still a challenge if we want to generate images precisely. Many researchers focus on how to generate images with one attribute. But image generation under multiple attributes is still a tough work. In this paper, we try to generate a variety of face images under multiple constraints using a pipeline process. The Pip-GAN (Pipeline Generative Adversarial Network) we present employs a pipeline network structure which can generate a complex facial image step by step using a neutral face image. We applied our method on two face image databases and demonstrate its ability to generate convincing novel images of unseen identities under multiple conditions previously.",
"title": ""
},
{
"docid": "5705022b0a08ca99d4419485f3c03eaa",
"text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach. This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.",
"title": ""
},
{
"docid": "733e5961428e5aad785926e389b9bd75",
"text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.",
"title": ""
},
{
"docid": "a5f78c3708a808fd39c4ced6152b30b8",
"text": "Building ontology for wireless network intrusion detection is an emerging method for the purpose of achieving high accuracy, comprehensive coverage, self-organization and flexibility for network security. In this paper, we leverage the power of Natural Language Processing (NLP) and Crowdsourcing for this purpose by constructing lightweight semi-automatic ontology learning framework which aims at developing a semantic-based solution-oriented intrusion detection knowledge map using documents from Scopus. Our proposed framework uses NLP as its automatic component and Crowdsourcing is applied for the semi part. The main intention of applying both NLP and Crowdsourcing is to develop a semi-automatic ontology learning method in which NLP is used to extract and connect useful concepts while in uncertain cases human power is leveraged for verification. This heuristic method shows a theoretical contribution in terms of lightweight and timesaving ontology learning model as well as practical value by providing solutions for detecting different types of intrusions.",
"title": ""
},
{
"docid": "c05b2317f529d79a2d05223c249549b6",
"text": "PURPOSE\nThis study presents a two-degree customized animated stimulus developed to evaluate smooth pursuit in children and investigates the effect of its predetermined characteristics (stimulus type and size) in an adult population. Then, the animated stimulus is used to evaluate the impact of different pursuit motion paradigms in children.\n\n\nMETHODS\nTo study the effect of animating a stimulus, eye movement recordings were obtained from 20 young adults while the customized animated stimulus and a standard dot stimulus were presented moving horizontally at a constant velocity. To study the effect of using a larger stimulus size, eye movement recordings were obtained from 10 young adults while presenting a standard dot stimulus of different size (1° and 2°) moving horizontally at a constant velocity. Finally, eye movement recordings were obtained from 12 children while the 2° customized animated stimulus was presented after three different smooth pursuit motion paradigms. Performance parameters, including gains and number of saccades, were calculated for each stimulus condition.\n\n\nRESULTS\nThe animated stimulus produced in young adults significantly higher velocity gain (mean: 0.93; 95% CI: 0.90-0.96; P = .014), position gain (0.93; 0.85-1; P = .025), proportion of smooth pursuit (0.94; 0.91-0.96, P = .002), and fewer saccades (5.30; 3.64-6.96, P = .008) than a standard dot (velocity gain: 0.87; 0.82-0.92; position gain: 0.82; 0.72-0.92; proportion smooth pursuit: 0.87; 0.83-0.90; number of saccades: 7.75; 5.30-10.46). In contrast, changing the size of a standard dot stimulus from 1° to 2° did not have an effect on smooth pursuit in young adults (P > .05). Finally, smooth pursuit performance did not significantly differ in children for the different motion paradigms when using the animated stimulus (P > .05).\n\n\nCONCLUSIONS\nAttention-grabbing and more dynamic stimuli, such as the developed animated stimulus, might potentially be useful for eye movement research. Finally, with such stimuli, children perform equally well irrespective of the motion paradigm used.",
"title": ""
},
{
"docid": "ca52ed08e302b843ca4bc0a0e8d2fd5c",
"text": "We report a case of surgical treatment for Hallermann-Streiff syndrome in a patient with ocular manifestations of esotropia, entropion, and blepharoptosis. A 54-year-old man visited Yeouido St. Mary's Hospital complaining of ocular discomfort due to cilia touching the corneas of both eyes for several years. He had a bird-like face, pinched nose, hypotrichosis of the scalp, mandibular hypoplasia with forward displacement of the temporomandibular joints, a small mouth, and proportional short stature. His ophthalmic features included sparse eyelashes and eyebrows, microphthalmia, nystagmus, lower lid entropion in the right eye, and upper lid entropion with blepharoptosis in both eyes. There was esodeviation of the eyeball of more than 100 prism diopters at near and distance, and there were limitations in ocular movement on lateral gaze. The capsulopalpebral fascia was repaired to treat the right lower lid entropion, but an additional Quickert suture was required to prevent recurrence. Blepharoplasty and levator palpebrae repair were performed for blepharoptosis and dermatochalasis. Three months after lid surgery, the right medial rectus muscle was recessed 7.5 mm, the left medial rectus was recessed 7.25 mm, and the left lateral rectus muscle was resected 8.0 mm.",
"title": ""
},
{
"docid": "7078d24d78abf6c46a6bc8c2213561c4",
"text": "In the past two decades, a new form of scholarship has appeared in which researchers present an overview of previously conducted research syntheses on the same topic. In these efforts, research syntheses are the principal units of evidence. Overviews of reviews introduce unique problems that require unique solutions. This article describes what methods overviewers have developed or have adopted from other forms of scholarship. These methods concern how to (a) define the broader problem space of an overview, (b) conduct literature searches that specifically look for research syntheses, (c) address the overlap in evidence in related reviews, (d) evaluate the quality of both primary research and research syntheses, (e) integrate the outcomes of research syntheses, especially when they produce discordant results, (f) conduct a second-order meta-analysis, and (g) present findings. The limitations of overviews are also discussed, especially with regard to the age of the included evidence.",
"title": ""
},
{
"docid": "491ad4b4ab179db2efd54f3149d08db5",
"text": "In robotics, Air Muscle is used as the analogy of the biological motor for locomotion or manipulation. It has advantages like the passive Damping, good power-weight ratio and usage in rough environments. An experimental test set up is designed to test both contraction and volume trapped in Air Muscle. This paper gives the characteristics of Air Muscle in terms of contraction of Air Muscle with variation of pressure at different loads and also in terms of volume of air trapped in it with variation in pressure at different loads. Braid structure of the Muscle has been described and its theoretical and experimental aspects of the characteristics of an Air Muscle are analysed.",
"title": ""
},
{
"docid": "cf62cb1e0b3cac894a277762808c68e0",
"text": "-Most educational institutions’ administrators are concerned about student irregular attendance. Truancies can affect student overall academic performance. The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, hence inefficient. Therefore, computer based student attendance management system is required to assist the faculty and the lecturer for this time-provide much convenient method to take attendance, but some prerequisites has to be done before start using the program. Although the use of RFID systems in educational institutions is not new, it is intended to show how the use of it came to solve daily problems in our university. The system has been built using the web-based applications such as ASP.NET and IIS server to cater the recording and reporting of the students’ attendances The system can be easily accessed by the lecturers via the web and most importantly, the reports can be generated in real-time processing, thus, providing valuable information about the students’.",
"title": ""
},
{
"docid": "c446ce16a62f832a167101293fe8b58d",
"text": "Unforeseen events such as node failures and resource contention can have a severe impact on the performance of data processing frameworks, such as Hadoop, especially in cloud environments where such incidents are common. SLA compliance in the presence of such events requires the ability to quickly and dynamically resize infrastructure resources. Unfortunately, the distributed and stateful nature of data processing frameworks makes it challenging to accurately scale the system at run-time. In this paper, we present the design and implementation of a model-driven autoscaling solution for Hadoop clusters. We first develop novel gray-box performance models for Hadoop workloads that specifically relate job execution times to resource allocation and workload parameters. We then employ these models to dynamically determine the resources required to successfully complete the Hadoop jobs as per the user-specified SLA under various scenarios including node failures and multi-job executions. Our experimental results on three different Hadoop cloud clusters and across different workloads demonstrate the efficacy of our models and highlight their autoscaling capabilities.",
"title": ""
},
{
"docid": "fdcf6e60ad11b10fba077a62f7f1812d",
"text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internetscale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.",
"title": ""
},
{
"docid": "d0ea7fe7ed0dfdca3b43de20bb1dc1d0",
"text": "Text clustering methods can be used to structure large sets of text or hypertext documents. The well-known methods of text clustering, however, do not really address the special problems of text clustering: very high dimensionality of the data, very large size of the databases and understandability of the cluster description. In this paper, we introduce a novel approach which uses frequent item (term) sets for text clustering. Such frequent sets can be efficiently discovered using algorithms for association rule mining. To cluster based on frequent term sets, we measure the mutual overlap of frequent sets with respect to the sets of supporting documents. We present two algorithms for frequent term-based text clustering, FTC which creates flat clusterings and HFTC for hierarchical clustering. An experimental evaluation on classical text documents as well as on web documents demonstrates that the proposed algorithms obtain clusterings of comparable quality significantly more efficiently than state-of-the- art text clustering algorithms. Furthermore, our methods provide an understandable description of the discovered clusters by their frequent term sets.",
"title": ""
},
{
"docid": "763b8982d13b0637a17347b2c557f1f8",
"text": "This paper describes an application of Case-Based Reasonin g to the problem of reducing the number of final-line fraud investigation s i the credit approval process. The performance of a suite of algorithms whi ch are applied in combination to determine a diagnosis from a set of retriev ed cases is reported. An adaptive diagnosis algorithm combining several neighbourhoodbased and probabilistic algorithms was found to have the bes t performance, and these results indicate that an adaptive solution can pro vide fraud filtering and case ordering functions for reducing the number of fin al-li e fraud investigations necessary.",
"title": ""
},
{
"docid": "a0e68c731cdb46d1bdf708997a871695",
"text": "Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.",
"title": ""
},
{
"docid": "3a7bfdaf92ae9b0509220016eecc8042",
"text": "Background/Objectives:Policies focused on food quality are intended to facilitate healthy choices by consumers, even those who are not fully informed about the links between food consumption and health. The goal of this paper is to evaluate the potential impact of such a food reformulation scenario on health outcomes.Subjects/Methods:We first created reformulation scenarios adapted to the French characteristics of foods. After computing the changes in the nutrient intakes of representative consumers, we determined the health effects of these changes. To do so, we used the DIETRON health assessment model, which calculates the number of deaths avoided by changes in food and nutrient intakes.Results:Depending on the reformulation scenario, the total impact of reformulation varies between 2408 and 3597 avoided deaths per year, which amounts to a 3.7–5.5% reduction in mortality linked to diseases considered in the DIETRON model. The impacts are much higher for men than for women and much higher for low-income categories than for high-income categories. These differences result from the differences in consumption patterns and initial disease prevalence among the various income categories.Conclusions:Even without any changes in consumers’ behaviors, realistic food reformulation may have significant health outcomes.",
"title": ""
},
{
"docid": "149de84d7cbc9ea891b4b1297957ade7",
"text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.",
"title": ""
},
{
"docid": "7fff067167bb50cab7ab84c91518031a",
"text": "Unsupervised depth estimation from a single image is a very attractive technique with several implications in robotic, autonomous navigation, augmented reality and so on. This topic represents a very challenging task and the advent of deep learning enabled to tackle this problem with excellent results. However, these architectures are extremely deep and complex. Thus, real-time performance can be achieved only by leveraging power-hungry GPUs that do not allow to infer depth maps in application fields characterized by low-power constraints. To tackle this issue, in this paper we propose a novel architecture capable to quickly infer an accurate depth map on a CPU, even of an embedded system, using a pyramid of features extracted from a single input image. Similarly to state-of-the-art, we train our network in an unsupervised manner casting depth estimation as an image reconstruction problem. Extensive experimental results on the KITTI dataset show that compared to the top performing approach our network has similar accuracy but a much lower complexity (about 6% of parameters) enabling to infer a depth map for a KITTI image in about 1.7 s on the Raspberry Pi 3 and at more than 8 Hz on a standard CPU. Moreover, by trading accuracy for efficiency, our network allows to infer maps at about 2 Hz and 40 Hz respectively, still being more accurate than most state-of-the-art slower methods. To the best of our knowledge, it is the first method enabling such performance on CPUs paving the way for effective deployment of unsupervised monocular depth estimation even on embedded systems.",
"title": ""
},
{
"docid": "0b22d7f6326210f02da44b0fa686f25a",
"text": "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.",
"title": ""
}
] |
scidocsrr
|
d7dcdb0f375f3cd055764fb1951a7241
|
AND: Autoregressive Novelty Detectors
|
[
{
"docid": "5d80ce0bffd5bc2016aac657669a98de",
"text": "Information and Communication Technology (ICT) has a great impact on social wellbeing, economic growth and national security in todays world. Generally, ICT includes computers, mobile communication devices and networks. ICT is also embraced by a group of people with malicious intent, also known as network intruders, cyber criminals, etc. Confronting these detrimental cyber activities is one of the international priorities and important research area. Anomaly detection is an important data analysis task which is useful for identifying the network intrusions. This paper presents an in-depth analysis of four major categories of anomaly detection techniques which include classification, statistical, information theory and clustering. The paper also discusses research challenges with the datasets used for network intrusion detection. & 2015 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "a7456ecf7af7e447cdde61f371128965",
"text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.",
"title": ""
}
] |
[
{
"docid": "bba81ac392b87a123a1e2f025bffd30c",
"text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.",
"title": ""
},
{
"docid": "b7961e6b82ca38e65fcfefcb5309bd46",
"text": "IMPORTANCE\nCryolipolysis is the noninvasive reduction of fat with localized cutaneous cooling. Since initial introduction, over 650,000 cryolipolysis treatment cycles have been performed worldwide. We present a previously unreported, rare adverse effect following cryolipolysis: paradoxical adipose hyperplasia.\n\n\nOBSERVATIONS\nA man in his 40s underwent a single cycle of cryolipolysis to his abdomen. Three months following his treatment, a gradual enlargement of the treatment area was noted. This enlargement was a large, well-demarcated subcutaneous mass, slightly tender to palpation. Imaging studies revealed accumulation of adipose tissue with normal signal intensity within the treatment area.\n\n\nCONCLUSIONS AND RELEVANCE\nParadoxical adipose hyperplasia is a rare, previously unreported adverse effect of cryolipolysis with an incidence of 0.0051%. No single unifying risk factor has been identified. The phenomenon seems to be more common in male patients undergoing cryolipolysis. At this time, there is no evidence of spontaneous resolution. Further studies are needed to characterize the pathogenesis and histologic findings of this rare adverse event.",
"title": ""
},
{
"docid": "88a8ea1de5ad5cb8883890c1e30b3491",
"text": "Service robots will have to accomplish more and more complex, open-ended tasks and regularly acquire new skills. In this work, we propose a new approach to the problem of generating plans for such household robots. Instead composing them from atomic actions — the common approach in robot planning — we propose to transform task descriptions on web sites like ehow.com into executable robot plans. We present methods for automatically converting the instructions from natural language into a formal, logic-based representation, for resolving the word senses using the WordNet database and the Cyc ontology, and for exporting the generated plans into the mobile robot's plan language RPL. We discuss the problem of inferring information that is missing in these descriptions and the problem of grounding the abstract task descriptions in the perception and action system, and we propose techniques for solving them. The whole system works autonomously without human interaction. It has successfully been tested with a set of about 150 natural language directives, of which up to 80% could be correctly transformed.",
"title": ""
},
{
"docid": "d62c2e7ca3040900d04f83ef4f99de4f",
"text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.",
"title": ""
},
{
"docid": "9adf653a332e07b8aa055b62449e1475",
"text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.",
"title": ""
},
{
"docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184",
"text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.",
"title": ""
},
{
"docid": "1debcbf981ae6115efcc4a853cd32bab",
"text": "Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.",
"title": ""
},
{
"docid": "2c39eafa87d34806dd1897335fdfe41c",
"text": "One of the issues facing credit card fraud detection systems is that a significant percentage of transactions labeled as fraudulent are in fact legitimate. These "false alarms" delay the detection of fraudulent transactions and can cause unnecessary concerns for customers. In this study, over 1 million unique credit card transactions from 11 months of data from a large Canadian bank were analyzed. A meta-classifier model was applied to the transactions after being analyzed by the Bank's existing neural network based fraud detection algorithm. This meta-classifier model consists of 3 base classifiers constructed using the decision tree, naïve Bayesian, and k-nearest neighbour algorithms. The naïve Bayesian algorithm was also used as the meta-level algorithm to combine the base classifier predictions to produce the final classifier. Results from the research show that when a meta-classifier was deployed in series with the Bank's existing fraud detection algorithm improvements of up to 28% to their existing system can be achieved.",
"title": ""
},
{
"docid": "88229017a9d4df8dfc44e996a116cbad",
"text": "BACKGROUND\nThe Society of Thoracic Surgeons (STS)/American College of Cardiology Transcatheter Valve Therapy (TVT) Registry captures all procedures with Food and Drug Administration-approved transcatheter valve devices performed in the United States, and is mandated as a condition of reimbursement by the Centers for Medicaid & Medicare Services.\n\n\nOBJECTIVES\nThis annual report focuses on patient characteristics, trends, and outcomes of transcatheter aortic and mitral valve catheter-based valve procedures in the United States.\n\n\nMETHODS\nWe reviewed data for all patients receiving commercially approved devices from 2012 through December 31, 2015, that are entered in the TVT Registry.\n\n\nRESULTS\nThe 54,782 patients with transcatheter aortic valve replacement demonstrated decreases in expected risk of 30-day operative mortality (STS Predicted Risk of Mortality [PROM]) of 7% to 6% and transcatheter aortic valve replacement PROM (TVT PROM) of 4% to 3% (both p < 0.0001) from 2012 to 2015. Observed in-hospital mortality decreased from 5.7% to 2.9%, and 1-year mortality decreased from 25.8% to 21.6%. However, 30-day post-procedure pacemaker insertion increased from 8.8% in 2013 to 12.0% in 2015. The 2,556 patients who underwent transcatheter mitral leaflet clip in 2015 were similar to patients from 2013 to 2014, with hospital mortality of 2% and with mitral regurgitation reduced to grade ≤2 in 87% of patients (p < 0.0001). The 349 patients who underwent mitral valve-in-valve and mitral valve-in-ring procedures were high risk, with an STS PROM for mitral valve replacement of 11%. The observed hospital mortality was 7.2%, and 30-day post-procedure mortality was 8.5%.\n\n\nCONCLUSIONS\nThe TVT Registry is an innovative registry that that monitors quality, patient safety and trends for these rapidly evolving new technologies.",
"title": ""
},
{
"docid": "b1dd6c2db60cae5405c07c3757ed6696",
"text": "In this paper, we present the Smartbin system that identifies fullness of litter bin. The system is designed to collect data and to deliver the data through wireless mesh network. The system also employs duty cycle technique to reduce power consumption and to maximize operational time. The Smartbin system was tested in an outdoor environment. Through the testbed, we collected data and applied sense-making methods to obtain litter bin utilization and litter bin daily seasonality information. With such information, litter bin providers and cleaning contractors are able to make better decision to increase productivity.",
"title": ""
},
{
"docid": "34623fb38c81af8efaf8e7073e4c43bc",
"text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et. al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, . . . , Ck) with corresponding centers (c1, . . . , ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min (C1,...,Ck),(c1,...,ck) k ∑",
"title": ""
},
{
"docid": "45bf73a93f0014820864d1805f257bfc",
"text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. In the proposed SEPIC based BDC converter is used to increase the voltage proposal of this is low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project PIC microcontro9 ller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on the both sides of the converter is always matched thereby the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.",
"title": ""
},
{
"docid": "efddb60143c59ee9e459e1048a09787c",
"text": "The aim of this paper is to determine the possibilities of using commercial off the shelf FPGA based Software Defined Radio Systems to develop a system capable of detecting and locating small drones.",
"title": ""
},
{
"docid": "7b4567b9f32795b267f2fb2d39bbee51",
"text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%. We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.",
"title": ""
},
{
"docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a",
"text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.",
"title": ""
},
{
"docid": "b18ca3607462ba54ec86055dfd4683fe",
"text": "Electric power transmission lines face increased threats from malicious attacks and natural disasters. This underscores the need to develop new techniques to ensure safe and reliable transmission of electric power. This paper deals with the development of an online monitoring technique based on mechanical state estimation to determine the sag levels of overhead transmission lines in real time and hence determine if these lines are in normal physical condition or have been damaged or downed. A computational algorithm based on least squares state estimation is applied to the physical transmission line equations to determine the conductor sag levels from measurements of tension, temperature, and other transmission line conductor parameters. The estimated conductor sag levels are used to generate warning signals of vertical clearance violations in the energy management system. These warning signals are displayed to the operator to make appropriate decisions to maintain the line within the prescribed clearance limits and prevent potential cascading failures.",
"title": ""
},
{
"docid": "c7fd5a26da59fab4e66e0cb3e93530d6",
"text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.",
"title": ""
},
{
"docid": "dcf9cba8bf8e2cc3f175e63e235f6b81",
"text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.",
"title": ""
},
{
"docid": "8a20feb22ce8797fa77b5d160919789c",
"text": "We proposed the concept of hardware software co-simulation for image processing using Xilinx system generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA based hardware design for enhancement of color and grey scale images in image and video processing. The top model – based visual development process of SIMULINK facilitates host side simulation and validation, as well as synthesis of target specific code, furthermore, legacy code written in MATLAB or ANCI C can be reuse in custom blocks. However, the code generated for DSP platforms is often not very efficient. We are implemented the Image processing applications on FPGA it can be easily design.",
"title": ""
}
] |
scidocsrr
|
20f57da36f9d8ec9fdab0f7eea8a015c
|
Privacy by design in big data: An overview of privacy enhancing technologies in the era of big data analytics
|
[
{
"docid": "c80222e5a7dfe420d16e10b45f8fab66",
"text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.",
"title": ""
}
] |
[
{
"docid": "fde0f116dfc929bf756d80e2ce69b1c7",
"text": "The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.",
"title": ""
},
{
"docid": "3299c32ee123e8c5fb28582e5f3a8455",
"text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support. Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.",
"title": ""
},
{
"docid": "b12cc6abd517246009e1d4230d1878c4",
"text": "Electronic government is being increasingly recognized as a means for transforming public governance. Despite this increasing interest, information systems (IS) literature is mostly silent on what really contributes to the success of e-government 100 TEO, SRIVASTAVA, AND JIANG Web sites. To fill this gap, this study examines the role of trust in e-government success using the updated DeLone and McLean IS success model as the theoretical framework. The model is tested via a survey of 214 Singapore e-government Web site users. The results show that trust in government, but not trust in technology, is positively related to trust in e-government Web sites. Further, trust in e-government Web sites is positively related to information quality, system quality, and service quality. The quality constructs have different effects on “intention to continue” using the Web site and “satisfaction” with the Web site. Post hoc analysis indicates that the nature of usage (active versus passive users) may help us better understand the interrelationships among success variables examined in this study. This result suggests that the DeLone and McLean model can be further extended by examining the nature of IS use. In addition, it is important to consider the role of trust as well as various Web site quality attributes in understanding e-government success.",
"title": ""
},
{
"docid": "141d607eb8caeb7512f777ee3dea5972",
"text": "DBSCAN is a base algorithm for density based clustering. It can detect the clusters of different shapes and sizes from the large amount of data which contains noise and outliers. However, it is fail to handle the local density variation that exists within the cluster. In this paper, we propose a density varied DBSCAN algorithm which is capable to handle local density variation within the cluster. It calculates the growing cluster density mean and then the cluster density variance for any core object, which is supposed to be expended further, by considering density of its -neighborhood with respect to cluster density mean. If cluster density variance for a core object is less than or equal to a threshold value and also satisfying the cluster similarity index, then it will allow the core object for expansion. The experimental results show that the proposed clustering algorithm gives optimized results.",
"title": ""
},
{
"docid": "da1ac93453bc9da937df4eb49902fbe5",
"text": "A novel hierarchical multimodal attention-based model is developed in this paper to generate more accurate and descriptive captions for images. Our model is an \"end-to-end\" neural network which contains three related sub-networks: a deep convolutional neural network to encode image contents, a recurrent neural network to identify the objects in images sequentially, and a multimodal attention-based recurrent neural network to generate image captions. The main contribution of our work is that the hierarchical structure and multimodal attention mechanism is both applied, thus each caption word can be generated with the multimodal attention on the intermediate semantic objects and the global visual content. Our experiments on two benchmark datasets have obtained very positive results.",
"title": ""
},
{
"docid": "2fe33171bc57e5b78ce4dafb30f7d427",
"text": "In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, in our system, a full set of tools have been developed to enable direct volume rendering manipulation of color, transparency, contrast, brightness, and other optical properties by brushing a few strokes on top of the rendered volume image. To be able to smartly identify the targeted features of the volume, our system matches the sparse sketching input with the clustered features both in image space and volume space. To achieve interactivity, both special algorithms to accelerate the input identification and feature matching have been developed and implemented in our system. Without resorting to tuning transfer function parameters, our proposed system accepts sparse stroke inputs and provides users with intuitive, flexible and effective interaction during volume data exploration and visualization.",
"title": ""
},
{
"docid": "63f078ce0186faa9f541b5b2145431ea",
"text": "Although insulated-gate bipolar-transistor (IGBT) turn-on losses can be comparable to turn-off losses, IGBT turn-on has not been as thoroughly studied in the literature. In the present work IGBT turn on under resistive and inductive load conditions is studied in detail through experiments, finite element simulations, and circuit simulations using physics-based semiconductor models. Under resistive load conditions, it is critical to accurately model the conductivity-modulation phenomenon. Under clamped inductive load conditions at turn-on there is strong interaction between the IGBT and the freewheeling diode undergoing reverse recovery. Physics-based IGBT and diode models are used that have been proved accurate in the simulation of IGBT turn-off.",
"title": ""
},
{
"docid": "8e0badc0828019460da0017774c8b631",
"text": "To meet the explosive growth in traffic during the next twenty years, 5G systems using local area networks need to be developed. These systems will comprise of small cells and will use extreme cell densification. The use of millimeter wave (Mmwave) frequencies, in particular from 20 GHz to 90 GHz, will revolutionize wireless communications given the extreme amount of available bandwidth. However, the different propagation conditions and hardware constraints of Mmwave (e.g., the use of RF beamforming with very large arrays) require reconsidering the modulation methods for Mmwave compared to those used below 6 GHz. In this paper we present ray-tracing results, which, along with recent propagation measurements at Mmwave, all point to the fact that Mmwave frequencies are very appropriate for next generation, 5G, local area wireless communication systems. Next, we propose null cyclic prefix single carrier as the best candidate for Mmwave communications. Finally, systemlevel simulation results show that with the right access point deployment peak rates of over 15 Gbps are possible at Mmwave along with a cell edge experience in excess of 400 Mbps.",
"title": ""
},
{
"docid": "7e8723331aaec6b4f448030a579fa328",
"text": "With the recent trend toward more non extraction treatment, several appliances have been advocated to distalize molars in the upper arch. Certain principles, as outlined by Burstone, must be borne in mind when designing such an appliance:",
"title": ""
},
{
"docid": "bf152c9b8937f84b3a7796133a5f0749",
"text": "This paper proposes a robust sensor fusion algorithm to accurately track the spatial location and motion of a human under various dynamic activities, such as walking, running, and jumping. The position accuracy of the indoor wireless positioning systems frequently suffers from non-line-of-sight and multipath effects, resulting in heavy-tailed outliers and signal outages. We address this problem by integrating the estimates from an ultra-wideband (UWB) system and inertial measurement units, but also taking advantage of the estimated velocity and height obtained from an aiding lower body biomechanical model. The proposed method is a cascaded Kalman filter-based algorithm where the orientation filter is cascaded with the robust position/velocity filter. The outliers are detected for individual measurements using the normalized innovation squared, where the measurement noise covariance is softly scaled to reduce its weight. The positioning accuracy is further improved with the Rauch–Tung–Striebel smoother. The proposed algorithm was validated against an optical motion tracking system for both slow (walking) and dynamic (running and jumping) activities performed in laboratory experiments. The results show that the proposed algorithm can maintain high accuracy for tracking the location of a subject in the presence of the outliers and UWB signal outages with a combined 3-D positioning error of less than 13 cm.",
"title": ""
},
{
"docid": "a7e2538186ce04325d24842c72ff41c6",
"text": "Omics refers to a field of study in biology such as genomics, proteomics, and metabolomics. Investigating fundamental biological problems based on omics data would increase our understanding of bio-systems as a whole. However, omics data is characterized with high-dimensionality and unbalance between features and samples, which poses big challenges for classical statistical analysis and machine learning methods. This paper studies a minimal-redundancy-maximal-relevance (MRMR) feature selection for omics data classification using three different relevance evaluation measures including mutual information (MI), correlation coefficient (CC), and maximal information coefficient (MIC). A linear forward search method is used to search the optimal feature subset. The experimental results on five real-world omics datasets indicate that MRMR feature selection with CC is more robust to obtain better (or competitive) classification accuracy than the other two measures.",
"title": ""
},
{
"docid": "cb13f835a46c44302e4068241cfc7142",
"text": "Medical diagnosis is an exciting are of research and many researchers have been working on the application of Artificial Intelligence techniques to develop disease recognition systems. They are analysing currently available information and also biochemical data collecting from clinical laboratories and experts for identifying pathological status of the patient. During the process of diagnosis, the clinical data so obtained from several sources must be inferred and classified into a particular pathology. Computer aided diagnosis tools designed based on biologically inspired methods such as artificial neural/immune networks can be employed to improve the regular diagnostic process and to avoid misdiagnosis. In this paper pre-processing and classification techniques are used to train the system. Artificial immune recognition method is used for pre-processing and KNN classifier is used for classification. The system is tested with some sample data and obtained the results. The system is validated with annotated data.",
"title": ""
},
{
"docid": "7b9df4427a6290cf5efda9c41612ad64",
"text": "A systematic design of planar MIMO monopole antennas with significantly reduced mutual coupling is presented, based on the concept of metamaterials. The design is performed by means of individual rectangular loop resonators, placed in the space between the antenna elements. The underlying principle is that resonators act like small metamaterial samples, thus providing an effective means of controlling electromagnetic wave propagation. The proposed design achieves considerably high levels of isolation between antenna elements, without essentially affecting the simplicity and planarity of the MIMO antenna.",
"title": ""
},
{
"docid": "c902e2669f233a48d9048b9c7abd1401",
"text": "Unmanned Aerial Vehicles (UAV)-based remote sensing offers great possibilities to acquire in a fast and easy way field data for precision agriculture applications. This field of study is rapidly increasing due to the benefits and advantages for farm resources management, particularly for studying crop health. This paper reports some experiences related to the analysis of cultivations (vineyards and tomatoes) with Tetracam multispectral data. The Tetracam camera was mounted on a multi-rotor hexacopter. The multispectral data were processed with a photogrammetric pipeline to create triband orthoimages of the surveyed sites. Those orthoimages were employed to extract some Vegetation Indices (VI) such as the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Soil Adjusted Vegetation Index (SAVI), examining the vegetation vigor for each crop. The paper demonstrates the great potential of high-resolution UAV data and photogrammetric techniques applied in the agriculture framework to collect multispectral images and OPEN ACCESS Remote Sens. 2015, 7 4027 evaluate different VI, suggesting that these instruments represent a fast, reliable, and cost-effective resource in crop assessment for precision farming applications.",
"title": ""
},
{
"docid": "2cf6b0b92b84da58c612e3767c6a24d9",
"text": "OBJECTIVE\nTo determine the effectiveness of early physiotherapy in reducing the risk of secondary lymphoedema after surgery for breast cancer.\n\n\nDESIGN\nRandomised, single blinded, clinical trial.\n\n\nSETTING\nUniversity hospital in Alcalá de Henares, Madrid, Spain.\n\n\nPARTICIPANTS\n120 women who had breast surgery involving dissection of axillary lymph nodes between May 2005 and June 2007.\n\n\nINTERVENTION\nThe early physiotherapy group was treated by a physiotherapist with a physiotherapy programme including manual lymph drainage, massage of scar tissue, and progressive active and action assisted shoulder exercises. This group also received an educational strategy. The control group received the educational strategy only.\n\n\nMAIN OUTCOME MEASURE\nIncidence of clinically significant secondary lymphoedema (>2 cm increase in arm circumference measured at two adjacent points compared with the non-affected arm).\n\n\nRESULTS\n116 women completed the one year follow-up. Of these, 18 developed secondary lymphoedema (16%): 14 in the control group (25%) and four in the intervention group (7%). The difference was significant (P=0.01); risk ratio 0.28 (95% confidence interval 0.10 to 0.79). A survival analysis showed a significant difference, with secondary lymphoedema being diagnosed four times earlier in the control group than in the intervention group (intervention/control, hazard ratio 0.26, 95% confidence interval 0.09 to 0.79).\n\n\nCONCLUSION\nEarly physiotherapy could be an effective intervention in the prevention of secondary lymphoedema in women for at least one year after surgery for breast cancer involving dissection of axillary lymph nodes.\n\n\nTRIAL REGISTRATION\nCurrent controlled trials ISRCTN95870846.",
"title": ""
},
{
"docid": "c2daec5b85a4e8eea614d855c6549ef0",
"text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.",
"title": ""
},
{
"docid": "fcc36e4c32953dd9deedd5fd11ca8a1a",
"text": "Effective human-robot cooperation requires robotic devices that understand human goals and intentions. We frame the problem of intent recognition as one of tracking and predicting human actions within the context of plan task sequences. A hybrid mode estimation approach, which estimates both discrete operating modes and continuous state, is used to accomplish this tracking based on possibly noisy sensor input. The operating modes correspond to plan tasks, hence, the ability to estimate and predict these provides a prediction of human actions and associated needs in the plan context. The discrete and continuous estimates interact in that the discrete mode selects continous dynamic models used in the continuous estimation, and the continuous state is used to evaluate guard conditions for mode transitions. Two applications: active prosthetic devices, and cooperative assembly, are described.",
"title": ""
},
{
"docid": "b250df76fdd27728af89b0c02aef5a68",
"text": "In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product. Four teams used the Specifying approach. Three teams used the Prototyping approach.\n The main results of the experiment were:\n Prototyping yielded products with roughly equivalent performance, but with about 40% less code and 45% less effort.\n The prototyped products rated somewhat lower on functionality and robustness, but higher on ease of use and ease of learning.\n Specifying produced more coherent designs and software that was easier to integrate.\n The paper presents the experimental data supporting these and a number of additional conclusions.",
"title": ""
},
{
"docid": "a691214a7ac8a1a7b4ad6fe833afd572",
"text": "Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms’ combination and post-processing operations are achieved with unary, binary and ${n}$ -ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms we obtained different GP solutions that we termed In Unity There Is Strength. These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm are significantly different from those of the other state-of-the-art algorithms. This fact is supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests.",
"title": ""
},
{
"docid": "1cbbc5af1327338283ca75e0bed7d53c",
"text": "Microscopic examination revealed polymorphic cells with abundant cytoplasm and large nuclei within the acanthotic epidermis (Figure 3). There were aggregated melanin granules in the epidermis, as well as a subepidermal lymphocytic infiltrate. The atypical cells were positive for CK7 (Figure 4). A few scattered cells were positive with the Melan-A stain (Figure 5). Pigmented lesion of the left nipple in a 49-year-old woman Case for Diagnosis",
"title": ""
}
] |
scidocsrr
|
6bd5f4367e4b61199da4da47b337a1ae
|
Dual Band-Reject UWB Antenna With Sharp Rejection of Narrow and Closely-Spaced Bands
|
[
{
"docid": "99d5eab7b0dfcb59f7111614714ddf95",
"text": "To prevent interference problems due to existing nearby communication systems within an ultrawideband (UWB) operating frequency, the significance of an efficient band-notched design is increased. Here, the band-notches are realized by adding independent controllable strips in terms of the notch frequency and the width of the band-notches to the fork shape of the UWB antenna. The size of the flat type band-notched UWB antenna is etched on 24 times 36 mm2 substrate. Two novel antennas are presented. One antenna is designed for single band-notch with a separated strip to cover the 5.15-5.825 GHz band. The second antenna is designed for dual band-notches using two separated strips to cover the 5.15-5.35 GHz band and 5.725-5.825 GHz band. The simulation and measurement show that the proposed antenna achieves a wide bandwidth from 3 to 12 GHz with the dual band-notches successfully.",
"title": ""
}
] |
[
{
"docid": "ff572b8e20b6f6792f8598b80660238f",
"text": "In this study, Cu pillar bump is firstly built on FCCSP with 65 nm low k chip. 7 DOE cells are designed to evaluate the effects of Cu pillar height, Cu pillar diameter, PI opening size and PI material on package reliability performance. No obvious failure is found after package assembly and long-term reliability test. The packages are still in good shape even though the reliability test is expanded to 3x test durations With the experiences of Cu pillar bump on 65 nm low k chip, Cu pillar bump is again built on FCBGA package with 45 nm ELK chip. White bump defect is found after chip bond via CSAM inspection, failure analysis shows that the white bump phenomenon is due to crack occurs inside ELK layer. A local heating bond tool (thermal compression bond) is used to improve ELK crack, test results illustrate ELK crack still exists, however the failure rate reduces from original 30%~50% to 5%~20%. Simulation analysis is conducted to study the effect of PI opening size and UBM size on stress concentration at ELK layer. Small PI opening size can reduce stress distribution at ELK layer. On the contrary, relatively large PI opening size and large UBM size also show positive effect on ELK crack. Assembly process and reliability test are conducted again to validate simulation results, experiment data is consistent with simulation result.",
"title": ""
},
{
"docid": "35da724255bbceb859d01ccaa0dec3b1",
"text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.",
"title": ""
},
{
"docid": "d54b25dc88c99a02a66ed056bff78444",
"text": "Objectives: This study was undertaken to observe the topographical features of enamel surface deproteinized with 5.25% sodium hypochlorite (NaOCl) after phosphoric acid (H3PO4) etching using Scanning Electron Microscope (SEM) analysis and also the effect of enamel deproteinization after acid etching on the shear bond strength (SBS) of AdperTM Single Bond 2 adhesive and FiltekTM Z350 XT composite resin. Study design: SEM Observation: 10 enamel blocks of 1 mm2 from 10 human sound permanent molar teeth were obtained and treated with 37% H3PO4 gel for 15 seconds followed by treatment with 5.25% NaOCl for 60 seconds. All the 10 samples were subjected to SEM analysis and 5 microphotographs of each sample were obtained at 500X magnification and evaluated for the occurrence of Type I – II etching pattern in percentage (%) using Auto – CAD 2007 software. SBS Evaluation: A 5×4 mm window of the enamel surface was etched with 37% H3PO4 gel for 15 seconds, washed with distilled water and air dried. The etched enamel surface was then treated with 5.25% NaOCl for 60 seconds, washed with distilled water and air dried. A single coat of AdperTM Single Bond 2 adhesive was applied and photo polymerized for 20 seconds and FiltekTM Z350 XT composite resin block of length 5mm, width 4 mm and height 5 mm respectively was built and photo polymerized in increments for 20 seconds each. The shear bond strength of all the 20 test samples (permanent molar teeth) were measured (in MPa) on Instron Mechanical Testing Machine. Results: The mean value of Type I – II etching pattern of all the test samples was observed to be 40.68 + 26.38% and the mean SBS value for all the test samples was observed to be 17.35 + 7.25 MPa. Conclusions: No significant enhancive effect of enamel deproteinization after acid etching with respect to the occurrence of Type I-II etching patterns as well as on the SBS of adhesive resin and composite resin complex to the enamel surface was observed in this study. The use of 37% phosphoric acid alone for 15 seconds still remains the best method for pretreatment of the enamel. *Corresponding author: Dr. Ramakrishna Yeluri. M.D.S, F.P.F.A, Professor, Department of Pedodontics and Preventive Dentistry, K.D. Dental College and Hospital, Mathura – Delhi N.H #2, Mathura – 281001, Uttar Pradesh, India, Tel: +919997951558; Fax: 0565-2530764; E-mail: drramakrishnay@indiatimes.com, kittypedo@yahoo.com Received December 17, 2013; Accepted Janaury 21, 2014; Published January 23, 2014 Citation: Ramakrishna Y, Bhoomika A, Harleen N, Munshi AK (2014) Enamel Deproteinization after Acid Etching Is it Worth the Effort? Dentistry 4: 200. doi:10.4172/2161-1122.1000200 Copyright: © 2014 Ramakrishna Y, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "90813d00050fdb1b8ce1a9dffe858d46",
"text": "Background: Diabetes mellitus is associated with biochemical and pathological alterations in the liver. The aim of this study was to investigate the effects of apple cider vinegar (ACV) on serum biochemical markers and histopathological changes in the liver of diabetic rats for 30 days. Effects were evaluated using streptozotocin (STZ)-induced diabetic rats as an experimental model. Materials and methods: Diabetes mellitus was induced by a single dose of STZ (65 mg/kg) given intraperitoneally. Thirty wistar rats were divided into three groups: control group, STZ-treated group and STZ plus ACV treated group (2 ml/kg BW). Animals were sacrificed 30 days post treatment. Results: Biochemical results indicated that, ACV caused a significant decrease in glucose, TC, LDL-c and a significant increase in HDL-c. Histopathological examination of the liver sections of diabetic rats showed fatty changes in the cytoplasm of the hepatocytes in the form of accumulation of lipid droplets, lymphocytic infiltration. Electron microscopic studies revealed aggregations of polymorphic mitochondria with apparent loss of their cristae and condensed matrices. Besides, the rough endoplasmic reticulum was proliferating and fragmented into smaller stacks. The cytoplasm of the hepatocytes exhibited vacuolations and displayed a large number of lipid droplets of different sizes. On the other hand, the liver sections of diabetic rats treated with ACV showed minimal toxic effects due to streptozotocin. These ultrastructural results revealed that treatment of diabetic rats with ACV led to apparent recovery of the injured hepatocytes. In prophetic medicine, Prophet Muhammad peace is upon him strongly recommended eating vinegar in the Prophetic Hadeeth: \"vinegar is the best edible\". Conclusion: This study showed that ACV, in early stages of diabetes inductioncan decrease the destructive progress of diabetes and cause hepatoprotection against the metabolic damages resulting from streptozotocininduced diabetes mellitus.",
"title": ""
},
{
"docid": "8f25b3b36031653311eee40c6c093768",
"text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.",
"title": ""
},
{
"docid": "a4b8f00bc8c37f56f85ed61cae226ef3",
"text": "Academic motivation is discussed in terms of self-efficacy, an individual's judgments of his or her capabilities to perform given actions. After presenting an overview of self-efficacy theory, I contrast self-efficacy with related constructs (perceived control, outcome expectations, perceived value of outcomes, attributions, and selfconcept) and discuss some efficacy research relevant to academic motivation. Studies of the effects of person variables (goal setting and information processing) and situation variables (models, attributional feedback, and rewards) on self-efficacy and motivation are reviewed. In conjunction with this discussion, I mention substantive issues that need to be addressed in the self-efficacy research and summarize evidence on the utility of self-efficacy for predicting motivational outcomes. Areas for future research are suggested. Article: The concept of personal expectancy has a rich history in psychological theory on human motivation (Atkinson, 1957; Rotter, 1966; Weiner, 1979). Research conducted within various theoretical traditions supports the idea that expectancy can influence behavioral instigation, direction, effort, and persistence (Bandura, 1986; Locke & Latham, 1990; Weiner, 1985). In this article, I discuss academic motivation in terms of one type of personal expectancy: self-efficacy, defined as \"People's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances\" (Bandura, 1986, p. 391). Since Bandura's (1977) seminal article on selfefficacy, much research has clarified and extended the role of self-efficacy as a mechanism underlying behavioral change, maintenance, and generalization. For example, there is evidence that self-efficacy predicts such diverse outcomes as academic achievements, social skills, smoking cessation, pain tolerance, athletic performances, career choices, assertiveness, coping with feared events, recovery from heart attack, and sales performance (Bandura, 1986). After presenting an overview of self-efficacy theory and comparison of self-efficacy with related constructs, I discuss some self-efficacy research relevant to academic motivation, pointing out substantive issues that need to be addressed. I conclude with recommendations for future research. SELF-EFFICACY THEORY Antecedents and Consequences Bandura (1977) hypothesized that self-efficacy affects an individual's choice of activities, effort, and persistence. People who have a low sense of efficacy for accomplishing a task may avoid it; those who believe they are capable should participate readily. Individuals who feel efficacious are hypothesized to work harder and persist longer when they encounter difficulties than those who doubt their capabilities. Self-efficacy theory postulates that people acquire information to appraise efficacy from their performance accomplishments, vicarious (observational) experiences, forms of persuasion, and physiological indexes. An individual's own performances offer the most reliable guides for assessing efficacy. Successes raise efficacy and failure lowers it, but once a strong sense of efficacy is developed, a failure may not have much impact (Bandura, 1986). An individual also acquires capability information from knowledge of others. Similar others offer the best basis for comparison (Schunk, 1989b). Observing similar peers perform a task conveys to observers that they too are capable of accomplishing it. 
Information acquired vicariously typically has a weaker effect on self-efficacy than performance-based information; a vicarious increase in efficacy can be negated by subsequent failures. Students often receive persuasory information that they possess the capabilities to perform a task (e.g., \"You can do this\"). Positive persuasory feedback enhances self-efficacy, but this increase will be temporary if subsequent efforts turn out poorly. Students also derive efficacy information from physiological indexes (e.g., heart rate and sweating). Bodily symptoms signaling anxiety might be interpreted to indicate a lack of skills. Information acquired from these sources does not automatically influence efficacy; rather, it is cognitively appraised (Bandura, 1986). Efficacy appraisal is an inferential process in which persons weigh and combine the contributions of such personal and situational factors as their perceived ability, the difficulty of the task, amount of effort expended, amount of external assistance received, number and pattern of successes and failures, their perceived similarity to models, and persuader credibility (Schunk, 1989b). Self-efficacy is not the only influence on behavior; it is not necessarily the most important. Behavior is a function of many variables. In achievement settings some other important variables are skills, outcome expectations, and the perceived value of outcomes (Schunk, 1989b). High self-efficacy will not produce competent performances when requisite skills are lacking. Outcome expectations, or beliefs concerning the probable outcomes of actions, are important because individuals are not motivated to act in ways they believe will result in negative outcomes. Perceived value of outcomes refers to how much people desire certain outcomes relative to others. Given adequate skills, positive outcome expectations, and personally valued outcomes, self-efficacy is hypothesized to influence the choice and direction of much human behavior (Bandura, 1989b). Schunk (1989b) discussed how self-efficacy might operate during academic learning. At the start of an activity, students differ in their beliefs about their capabilities to acquire knowledge, perform skills, master the material, and so forth. Initial self-efficacy varies as a function of aptitude (e.g., abilities and attitudes) and prior experience. Such personal factors as goal setting and information processing, along with situational factors (e.g., rewards and teacher feedback), affect students while they are working. From these factors students derive cues signaling how well they are learning, which they use to assess efficacy for further learning. Motivation is enhanced when students perceive they are making progress in learning. In turn, as students work on tasks and become more skillful, they maintain a sense of self-efficacy for performing well.",
"title": ""
},
{
"docid": "20c6b7417a31aceb39bcf1b1fa3fce4b",
"text": "In the process of dealing with the cutting calculation of Multi-axis CNC Simulation, the traditional Voxel Model not only will cost large computation time when judging whether the cutting happens or not, but also the data points may occupy greater storage space. So it cannot satisfy the requirement of real-time emulation, In the construction method of Compressed Voxel Model, it can satisfy the need of Multi-axis CNC Simulation, and storage space is relatively small. Also the model reconstruction speed is faster, but the Boolean computation in the cutting judgment is very complex, so it affects the real-time of CNC Simulation indirectly. Aimed at the shortcomings of these methods, we propose an improved solid modeling technique based on the Voxel model, which can meet the demand of real-time in cutting computation and Graphic display speed.",
"title": ""
},
{
"docid": "058a128a15c7d0e343adb3ada80e18d3",
"text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.",
"title": ""
},
{
"docid": "2bd2bd3b2604d29c11017413c109c47c",
"text": "Supervised semantic role labeling (SRL) systems are generally claimed to have accuracies in the range of 80% and higher (Erk and Padó, 2006). These numbers, though, are the result of highly-restricted evaluations, i.e., typically evaluating on hand-picked lemmas for which training data is available. In this paper we consider performance of such systems when we evaluate at the document level rather than on the lemma level. While it is wellknown that coverage gaps exist in the resources available for training supervised SRL systems, what we have been lacking until now is an understanding of the precise nature of this coverage problem and its impact on the performance of SRL systems. We present a typology of five different types of coverage gaps in FrameNet. We then analyze the impact of the coverage gaps on performance of a supervised semantic role labeling system on full texts, showing an average oracle upper bound of 46.8%.",
"title": ""
},
{
"docid": "dc3417d01a998ee476aeafc0e9d11c74",
"text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).",
"title": ""
},
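The quantization overview above recommends per-channel, symmetric 8-bit weight quantization. Below is a minimal NumPy sketch of that scheme, assuming an (out_channels, ...) weight layout; it is illustrative only and is not the TensorFlow/TensorFlowLite tooling the text refers to.

```python
import numpy as np

def quantize_per_channel(weights, num_bits=8):
    """Symmetric per-output-channel quantization of a weight tensor.

    `weights` has shape (out_channels, ...); each channel gets its own scale.
    Returns int8 codes and the per-channel scales needed to dequantize.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for 8 bits
    flat = weights.reshape(weights.shape[0], -1)
    scales = np.abs(flat).max(axis=1) / qmax            # one scale per channel
    scales = np.where(scales == 0, 1.0, scales)         # avoid division by zero
    q = np.clip(np.round(flat / scales[:, None]), -qmax - 1, qmax).astype(np.int8)
    return q.reshape(weights.shape), scales

# Hypothetical conv kernel: 64 output channels, 3x3x3 receptive field
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, s = quantize_per_channel(w)
w_hat = q.astype(np.float32) * s[:, None, None, None]   # dequantized approximation
```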
{
"docid": "245de72c0f333f4814990926e08c13e9",
"text": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",
"title": ""
},
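mixup, as described above, trains on convex combinations of example pairs and their labels. A minimal sketch follows, assuming one-hot labels and a single per-batch mixing coefficient drawn from Beta(alpha, alpha); the batch shapes and alpha value are illustrative.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """mixup: convex combinations of example pairs and their one-hot labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))          # random partner for every example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Hypothetical batch: 32 images of 3x32x32, 10 classes
x = np.random.rand(32, 3, 32, 32).astype(np.float32)
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=32)]
x_mix, y_mix = mixup_batch(x, y)
```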
{
"docid": "39fc7b710a6d8b0fdbc568b48221de5d",
"text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.",
"title": ""
},
{
"docid": "96c1da4e4b52014e4a9c5df098938c98",
"text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.",
"title": ""
},
{
"docid": "54fc5bc85ef8022d099fff14ab1b7ce0",
"text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.",
"title": ""
},
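The Mura-detection abstract above builds a differential image from an SVD/DCT reconstruction before applying detection modules. Below is a rough sketch of the SVD half of that idea, assuming a low-rank background model; the rank and the single global threshold are illustrative choices, not the paper's adaptive modules.

```python
import numpy as np

def differential_image(img, rank=3):
    """Reconstruct a smooth background with a truncated SVD and subtract it.

    `img` is a 2-D float array (a captured LCD cell image); `rank` is a
    hypothetical choice for how many singular components form the background.
    """
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    background = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return img - background                  # mura defects stand out as residuals

def detect_mura(img, rank=3, k=3.0):
    diff = differential_image(img, rank)
    thresh = diff.mean() + k * diff.std()    # simple global threshold (assumption)
    return diff > thresh                     # boolean defect mask

cell = np.random.rand(256, 256)              # stand-in for a captured LCD cell image
mask = detect_mura(cell)
```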
{
"docid": "42d31b6b66192552d0f0aa1ce9a36e21",
"text": "OBJECTIVE\nAlthough stress is often presumed to cause sleep disturbances, little research has documented the role of stressful life events in primary insomnia. The present study examined the relationship of stress and coping skills, and the potential mediating role of presleep arousal, to sleep patterns in good sleepers and insomnia sufferers.\n\n\nMETHODS\nThe sample was composed of 67 participants (38 women, 29 men; mean age, 39.6 years), 40 individuals with insomnia and 27 good sleepers. Subjects completed prospective, daily measures of stressful events, presleep arousal, and sleep for 21 consecutive days. In addition, they completed several retrospective and global measures of depression, anxiety, stressful life events, and coping skills.\n\n\nRESULTS\nThe results showed that poor and good sleepers reported equivalent numbers of minor stressful life events. However, insomniacs rated both the impact of daily minor stressors and the intensity of major negative life events higher than did good sleepers. In addition, insomniacs perceived their lives as more stressful, relied more on emotion-oriented coping strategies, and reported greater presleep arousal than good sleepers. Prospective daily data showed significant relationships between daytime stress and nighttime sleep, but presleep arousal and coping skills played an important mediating role.\n\n\nCONCLUSIONS\nThe findings suggest that the appraisal of stressors and the perceived lack of control over stressful events, rather than the number of stressful events per se, enhance the vulnerability to insomnia. Arousal and coping skills play an important mediating role between stress and sleep. The main implication of these results is that insomnia treatments should incorporate clinical methods designed to teach effective stress appraisal and coping skills.",
"title": ""
},
{
"docid": "ea041a1df42906b0d5a3644ae8ba933b",
"text": "In recent years, program verifiers and interactive theorem provers have become more powerful and more suitable for verifying large programs or proofs. This has demonstrated the need for improving the user experience of these tools to increase productivity and to make them more accessible to nonexperts. This paper presents an integrated development environment for Dafny—a programming language, verifier, and proof assistant—that addresses issues present in most state-of-the-art verifiers: low responsiveness and lack of support for understanding non-obvious verification failures. The paper demonstrates several new features that move the state-of-the-art closer towards a verification environment that can provide verification feedback as the user types and can present more helpful information about the program or failed verifications in a demand-driven and unobtrusive way.",
"title": ""
},
{
"docid": "8704a4033132a1d26cf2da726a60045e",
"text": "In practical classification, there is often a mix of learnable and unlearnable classes and only a classifier above a minimum performance threshold can be deployed. This problem is exacerbated if the training set is created by active learning. The bias of actively learned training sets makes it hard to determine whether a class has been learned. We give evidence that there is no general and efficient method for reducing the bias and correctly identifying classes that have been learned. However, we characterize a number of scenarios where active learning can succeed despite these difficulties.",
"title": ""
},
{
"docid": "5e2cfcfb49286b50bcfc6eb1648afc99",
"text": "Face analysis is a rapidly developing research area and facial landmark detection is one of the pre-processing steps. In recent years, many algorithms and comprehensive survey/ challenge papers have been published on facial landmark detection. In this work, we analysed six survey/challenge papers and observed that among open source systems deep learning (TCDCN, DCR) and regression based (CFSS) methods show superior performance.",
"title": ""
},
{
"docid": "906659aa61bbdb5e904a1749552c4741",
"text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "28cf177349095e7db4cdaf6c9c4a6cb1",
"text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.",
"title": ""
}
] |
scidocsrr
|
dd6d377b0614dce9021713a3d9572e68
|
Altruism and selfishness.
|
[
{
"docid": "7beb0fa9fa3519d291aa3d224bfc1b74",
"text": "In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with effects of homicide mortality removed. This \"cause deleted\" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions.",
"title": ""
}
] |
[
{
"docid": "df1e281417844a0641c3b89659e18102",
"text": "In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, highresolution estimate from a noisy, low-resolution input depth map. Additionally, a highresolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.",
"title": ""
},
{
"docid": "4a6d231ce704e4acf9320ac3bd5ade14",
"text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",
"title": ""
},
{
"docid": "3eb022b3ec1517bc54670a68c8a14106",
"text": "Waste as a management issue has been evident for over four millennia. Disposal of waste to the biosphere has given way to thinking about, and trying to implement, an integrated waste management approach. In 1996 the United Nations Environmental Programme (UNEP) defined 'integrated waste management' as 'a framework of reference for designing and implementing new waste management systems and for analysing and optimising existing systems'. In this paper the concept of integrated waste management as defined by UNEP is considered, along with the parameters that constitute integrated waste management. The examples used are put into four categories: (1) integration within a single medium (solid, aqueous or atmospheric wastes) by considering alternative waste management options, (2) multi-media integration (solid, aqueous, atmospheric and energy wastes) by considering waste management options that can be applied to more than one medium, (3) tools (regulatory, economic, voluntary and informational) and (4) agents (governmental bodies (local and national), businesses and the community). This evaluation allows guidelines for enhancing success: (1) as experience increases, it is possible to deal with a greater complexity; and (2) integrated waste management requires a holistic approach, which encompasses a life cycle understanding of products and services. This in turn requires different specialisms to be involved in the instigation and analysis of an integrated waste management system. Taken together these advance the path to sustainability.",
"title": ""
},
{
"docid": "e808606994c3fd8eea1b78e8a3e55b8c",
"text": "We describe a Japanese-English patent parallel corpus created from the Japanese and US patent data provided for the NTCIR-6 patent retrieval task. The corpus contains about 2 million sentence pairs that were aligned automatically. This is the largest Japanese-English parallel corpus, which will be available to the public after the 7th NTCIR workshop meeting. We estimated that about 97% of the sentence pairs were correct alignments and about 90% of the alignments were adequate translations whose English sentences reflected almost perfectly the contents of the corresponding Japanese sentences.",
"title": ""
},
{
"docid": "dc4a2fa822a685997c83e6fd49b30f56",
"text": "Complex event processing (CEP) has become increasingly important for tracking and monitoring applications ranging from health care, supply chain management to surveillance. These monitoring applications submit complex event queries to track sequences of events that match a given pattern. As these systems mature the need for increasingly complex nested sequence queries arises, while the state-of-the-art CEP systems mostly focus on the execution of flat sequence queries only. In this paper, we now introduce an iterative execution strategy for nested CEP queries composed of sequence, negation, AND and OR operators. Lastly we have introduced the promising direction of applying selective caching of intermediate results to optimize the execution. Our experimental study using real-world stock trades evaluates the performance of our proposed iterative execution strategy for different query types.",
"title": ""
},
{
"docid": "25786c5516b559fc4a566e72485fdcc6",
"text": "We propose an algorithm to improve the quality of depth-maps used for Multi-View Stereo (MVS). Many existing MVS techniques make use of a two stage approach which estimates depth-maps from neighbouring images and then merges them to extract a final surface. Often the depth-maps used for the merging stage will contain outliers due to errors in the matching process. Traditional systems exploit redundancy in the image sequence (the surface is seen in many views), in order to make the final surface estimate robust to these outliers. In the case of sparse data sets there is often insufficient redundancy and thus performance degrades as the number of images decreases. In order to improve performance in these circumstances it is necessary to remove the outliers from the depth-maps. We identify the two main sources of outliers in a top performing algorithm: (1) spurious matches due to repeated texture and (2) matching failure due to occlusion, distortion and lack of texture. We propose two contributions to tackle these failure modes. Firstly, we store multiple depth hypotheses and use a spatial consistently constraint to extract the true depth. Secondly, we allow the algorithm to return an unknown state when the a true depth estimate cannot be found. By combining these in a discrete label MRF optimisation we are able to obtain high accuracy depthmaps with low numbers of outliers. We evaluate our algorithm in a multi-view stereo framework and find it to confer state-of-the-art performance with the leading techniques, in particular on the standard evaluation sparse data sets.",
"title": ""
},
{
"docid": "9e1e42d27521eb20b6fef10087dd2d9a",
"text": "This paper identifies the need for developing new ways to study curiosity in the context of today’s pervasive technologies and unprecedented information access. Curiosity is defined in this paper in a way which incorporates the concomitant constructs of interest and engagement. A theoretical model for curiosity, interest and engagement in new media technology-pervasive learning environments is advanced, taking into consideration personal, situational and contextual factors as influencing variables. While the path associated with curiosity, interest, and engagement during learning and research has remained essentially the same, how individuals tackle research and information-seeking tasks and factors which sustain such efforts have changed. Learning modalities for promoting this theoretical model are discussed leading to a series of recommendations for future research. This article offers a multi-lens perspective on curiosity and suggests a multi-method research agenda for validating such a perspective.",
"title": ""
},
{
"docid": "5f45659c16ca98f991a31d62fd70cdab",
"text": "Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.",
"title": ""
},
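The IrisCode analysis above notes that a simple two-state Markov model reproduces the statistics of the correlated bit sequences. A small sketch of that generative idea follows; the stay probability is an illustrative value, not the paper's fitted parameter.

```python
import numpy as np

def markov_bits(n, stay_prob=0.8, rng=None):
    """Generate a correlated bit sequence from a two-state Markov chain.

    The chain emits its current state (0 or 1) and remains in that state with
    probability `stay_prob`; higher values give longer runs and lower entropy
    per bit. `stay_prob=0.8` is an illustrative assumption.
    """
    if rng is None:
        rng = np.random.default_rng()
    bits = np.empty(n, dtype=np.uint8)
    state = rng.integers(0, 2)
    for i in range(n):
        bits[i] = state
        if rng.random() > stay_prob:   # leave the current state
            state ^= 1
    return bits

seq = markov_bits(2048)
print(seq[:32])
```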
{
"docid": "a90c56a22559807463b46d1c7ab36cb3",
"text": "We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patients could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with high eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of hist difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also to an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.",
"title": ""
},
{
"docid": "48a18e689b226936813f8dcfd2664819",
"text": "This report explores integrating fuzzy logic with two data mining methods (association rules and frequency episodes) for intrusion detection. Data mining methods are capable of extracting patterns automatically from a large amount of data. The integration with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. In this report, Chapter I introduces the concept of intrusion detection and the practicality of applying fuzzy logic to intrusion detection. In Chapter II, two types of intrusion detection systems, host-based systems and network-based systems, are briefly reviewed. Some important artificial intelligence techniques that have been applied to intrusion detection are also reviewed here, including data mining methods for anomaly detection. Chapter III summarizes a set of desired characteristics for the Intelligent Intrusion Detection Model (IIDM) being developed at Mississippi State University. A preliminary architecture which we have developed for integrating machine learning methods with other intrusion detection methods is also described. Chapter IV discusses basic fuzzy logic theory, traditional algorithms for mining association rules, and an original algorithm for mining frequency episodes. In Chapter V, the algorithms we have extended for mining fuzzy association rules and fuzzy frequency episodes are described. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Chapter VI describes a set of experiments of applying fuzzy association rules and fuzzy episode rules for off-line anomaly detection and real-time intrusion detection. We use fuzzy association rules and fuzzy frequency episodes to extract patterns for temporal statistical measurements at a higher level than the data level. We define a modified similarity evaluation function which is continuous and monotonic for the application of fuzzy association rules and fuzzy frequency episodes in anomaly detection. We also present a new real-time intrusion detection method using fuzzy episode rules. The experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. The conclusions are included in Chapter VII. ii DEDICATION I would like to dedicate this research to my family and my wife. iii ACKNOWLEDGMENTS I am deeply grateful to Dr. Susan Bridges for expending much time to direct me in this entire research project and directing my graduate study and research work …",
"title": ""
},
{
"docid": "db1abd38db0295fc573bdfca2c2b19a3",
"text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.",
"title": ""
},
{
"docid": "07631274713ad80653552767d2fe461c",
"text": "Life cycle assessment (LCA) methodology was used to determine the optimum municipal solid waste (MSW) management strategy for Eskisehir city. Eskisehir is one of the developing cities of Turkey where a total of approximately 750tons/day of waste is generated. An effective MSW management system is needed in this city since the generated MSW is dumped in an unregulated dumping site that has no liner, no biogas capture, etc. Therefore, five different scenarios were developed as alternatives to the current waste management system. Collection and transportation of waste, a material recovery facility (MRF), recycling, composting, incineration and landfilling processes were considered in these scenarios. SimaPro7 libraries were used to obtain background data for the life cycle inventory. One ton of municipal solid waste of Eskisehir was selected as the functional unit. The alternative scenarios were compared through the CML 2000 method and these comparisons were carried out from the abiotic depletion, global warming, human toxicity, acidification, eutrophication and photochemical ozone depletion points of view. According to the comparisons and sensitivity analysis, composting scenario, S3, is the more environmentally preferable alternative. In this study waste management alternatives were investigated only on an environmental point of view. For that reason, it might be supported with other decision-making tools that consider the economic and social effects of solid waste management.",
"title": ""
},
{
"docid": "68f797b34880bf08a8825332165a955b",
"text": "The immune system responds to pathogens by a variety of pattern recognition molecules such as the Toll-like receptors (TLRs), which promote recognition of dangerous foreign pathogens. However, recent evidence indicates that normal intestinal microbiota might also positively influence immune responses, and protect against the development of inflammatory diseases. One of these elements may be short-chain fatty acids (SCFAs), which are produced by fermentation of dietary fibre by intestinal microbiota. A feature of human ulcerative colitis and other colitic diseases is a change in ‘healthy’ microbiota such as Bifidobacterium and Bacteriodes, and a concurrent reduction in SCFAs. Moreover, increased intake of fermentable dietary fibre, or SCFAs, seems to be clinically beneficial in the treatment of colitis. SCFAs bind the G-protein-coupled receptor 43 (GPR43, also known as FFAR2), and here we show that SCFA–GPR43 interactions profoundly affect inflammatory responses. Stimulation of GPR43 by SCFAs was necessary for the normal resolution of certain inflammatory responses, because GPR43-deficient (Gpr43-/-) mice showed exacerbated or unresolving inflammation in models of colitis, arthritis and asthma. This seemed to relate to increased production of inflammatory mediators by Gpr43-/- immune cells, and increased immune cell recruitment. Germ-free mice, which are devoid of bacteria and express little or no SCFAs, showed a similar dysregulation of certain inflammatory responses. GPR43 binding of SCFAs potentially provides a molecular link between diet, gastrointestinal bacterial metabolism, and immune and inflammatory responses.",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "244116ffa1ed424fc8519eedc7062277",
"text": "This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.",
"title": ""
},
{
"docid": "8a4956ba4209b4c557f4f85ee7a885e7",
"text": "In the Brand literature, few studies especially in Iran investigated the brand functions and business success. Hence, this study aims to provide the desirable model to creation and developing a deeper insight into the role of brand equity in the relationship between brand personality and customers purchase intention. The study statistical population consists of the whole Mellat Bank customers in Qazvin province, which used a questionnaire to collect data from them. In addition to, four hypotheses were announced and tested using structural equation modeling techniques. Research findings show the significant and positive effects of the brand personality on brand equity and purchase intention. Likewise, the results revealed that brand equity has a positive influence on customers' purchase intention and has a positive mediator role for the other two variables. According to the results of study, it is recommended to organizations and those marketing managers to take action to create a positive brand personality until make differentiation in customers minds compared to other brands and enhance brand equity and achieved to the comprehensive understanding of consumer behavior. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "d7527aeeb5f26f23930b8d674beb0a13",
"text": "A three-part investigation was conducted to explore the meaning of color preferences. Phase 1 used a Q-sort technique to assess intra-individual stability of preferences over 5 wk. Phase 2 used principal components analysis to discern the manner in which preferences were being made. Phase 3 used canonical correlation to evaluate a hypothesized relationship between color preferences and personality, with five scales of the Personality Research Form serving as the criterion measure. Munsell standard papers, a standard light source, and a color vision test were among control devices applied. There were marked differences in stability of color preferences. Sex differences in intra-individual stability were also apparent among the 90 subjects. An interaction of hue and lightness appeared to underlie such judgments when saturation was kept constant. An unexpected breakdown in control pointed toward the possibly powerful effect of surface finish upon color preference. No relationship to five manifest needs were found. It was concluded that the beginning steps had been undertaken toward psychometric development of a reliable technique for the measurement of color preference.",
"title": ""
},
{
"docid": "4d2e8924181d129e23f8b51eccd7e1ef",
"text": "This paper presents the design, fabrication, and characterization of millimeter-scale rotary electromagnetic generators. The axial-flux synchronous machines consist of a three-phase microfabricated surface-wound copper coil and a multipole permanent-magnet (PM) rotor measuring 2 mm in diameter. Several machines with various geometries and numbers of magnetic poles and turns per pole are designed and compared. Moreover, the use of different PM materials is investigated. Multipole magnetic rotors are modeled using finite element analysis to analyze magnetic field distributions. In operation, the rotor is spun above the microfabricated stator coils using an off-the-shelf air-driven turbine. As a result of design choices, the generators present different levels of operating frequency and electrical output power. The four-pole six-turn/pole NdFeB generator exhibits up to 6.6 mWrms of ac electrical power across a resistive load at a rotational speed of 392 000 r/min. This milliwatt-scale power generation indicates the feasibility of such ultrasmall machines for low-power applications. [2008-0078].",
"title": ""
},
{
"docid": "34123b021d95c2380cde6390e9fdac6e",
"text": "Because the leg is known to exhibit springlike behavior during the stance phase of running, several exoskeletons have attempted to place external springs in parallel with some or all of the leg during stance, but these designs have failed to permit natural kinematics during swing. To this end, a parallel-elastic exoskeleton is presented that introduces a clutch to disengage the parallel leg-spring and thereby not constrain swing-phase movements of the biological leg. A custom interference clutch with integrated planetary gear transmission, made necessary by the requirement for high holding torque but low mass, is presented and shown to withstand up to 190 N m at 1.8 deg resolution with a mass of only 710 g. A suitable control strategy for locking the clutch at peak knee extension is also presented, where only an onboard rate gyroscope and exoskeletal joint encoder are employed as sensory inputs. Exoskeletal electromechanics, sensing, and control are shown to achieve design critieria necessary to emulate biological knee stiffness behaviors in running. [DOI: 10.1115/1.4027841]",
"title": ""
},
{
"docid": "feec0094203fdae5a900831ea81fcfb0",
"text": "Costs, market fragmentation, and new media channels that let customers bypass advertisements seem to be in league against the old ways of marketing. Relying on mass media campaigns to build strong brands may be a thing of the past. Several companies in Europe, making a virtue of necessity, have come up with alternative brand-building approaches and are blazing a trail in the post-mass-media age. In England, Nestlé's Buitoni brand grew through programs that taught the English how to cook Italian food. The Body Shop garnered loyalty with its support of environmental and social causes. Cadbury funded a theme park tied to its history in the chocolate business. Häagen-Dazs opened posh ice-cream parlors and got itself featured by name on the menus of fine restaurants. Hugo Boss and Swatch backed athletic or cultural events that became associated with their brands. The various campaigns shared characteristics that could serve as guidelines for any company hoping to build a successful brand: senior managers were closely involved with brand-building efforts; the companies recognized the importance of clarifying their core brand identity; and they made sure that all their efforts to gain visibility were tied to that core identity. Studying the methods of companies outside one's own industry and country can be instructive for managers. Pilot testing and the use of a single and continuous measure of brand equity also help managers get the most out of novel approaches in their ever more competitive world.",
"title": ""
}
] |
scidocsrr
|
1368e066976a3d74e6f0ebef805748d0
|
Efficient Implementations of Apriori and Eclat Christian Borgelt
|
[
{
"docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d",
"text": "New algorithms with previous native palm pdf reader approaches, with gains of over an order of magnitude using.We present two new algorithms for solving this problem. Regularities, association rules, and gave an algorithm for finding such rules. 4 An.fast discovery of association rules based on our ideas in 33, 35. New algorithms with previous approaches, with gains of over an order of magnitude using.",
"title": ""
}
] |
[
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "b77d297feeff92a2e7b03bf89b5f20db",
"text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.",
"title": ""
},
{
"docid": "0f503bded2c4b0676de16345d4596280",
"text": "An emerging approach to the problem of reducing the identity theft is represented by the adoption of biometric authentication systems. Such systems however present however several challenges, related to privacy, reliability, security of the biometric data. Inter-operability is also required among the devices used for the authentication. Moreover, very often biometric authentication in itself is not sufficient as a conclusive proof of identity and has to be complemented with multiple other proofs of identity like passwords, SSN, or other user identifiers. Multi-factor authentication mechanisms are thus required to enforce strong authentication based on the biometric and identifiers of other nature.In this paper we provide a two-phase authentication mechanism for federated identity management systems. The first phase consists of a two-factor biometric authentication based on zero knowledge proofs. We employ techniques from vector-space model to generate cryptographic biometric keys. These keys are kept secret, thus preserving the confidentiality of the biometric data, and at the same time exploit the advantages of a biometric authentication. The second authentication combines several authentication factors in conjunction with the biometric to provide a strong authentication. A key advantage of our approach is that any unanticipated combination of factors can be used. Such authentication system leverages the information of the user that are available from the federated identity management system.",
"title": ""
},
{
"docid": "b6614633537319c500e70a1866019969",
"text": "The life of a teenager today is far different than in past decades. Through semi-structured interviews with 10 teenagers and 10 parents of teenagers, we investigate parent-teen privacy decision making in these uncharted waters. Parents and teens generally agreed that teens had a need for some degree of privacy from their parents and that respecting teens’ privacy demonstrated trust and fostered independence. We explored the boundaries of teen privacy in both the physical and digital worlds. While parents commonly felt none of their children’s possessions should ethically be exempt from parental monitoring, teens felt strongly that cell phones, particularly text messages, were private. Parents discussed struggling to keep up with new technologies and to understand teens’ technology-mediated socializing. While most parents said they thought similarly about privacy in the physical and digital worlds, half of teens said they thought about these concepts differently. We present cases where parents made privacy decisions using false analogies with the physical world or outdated assumptions. We also highlight directions for more usable digital parenting tools.",
"title": ""
},
{
"docid": "ed544d89c317a91cdfe9f5ee8a2f574b",
"text": "The rapid growth of web resources lead to a need of enhanced Search scheme for information retrieval. Every single user contributes a part of new information to be added to the web every day. This huge data supplied are of diverse area in origin being added, without a mere relation. Hence, a novel search scheme must be applied for bringing out the relevant results on querying web for data. The current web search scheme could bring out only relevant pages to be as results. But, a Semantic web is a solution to this issue through providing a suitable result on understanding the appropriate need of information. It can be acquired through extending the support for databases in machine readable form. It leads to redefinition of current web into semantic web by adding semantic annotations. This paper gives an overview of Semantic mapping approaches. The main goal of this paper is to propose the steps for bringing out a new Semantic web discovery algorithm with an efficient Semantic mapping and a novel Classification Scheme for categorization of concepts.",
"title": ""
},
{
"docid": "563af54f4fd71ac011477ed32c041483",
"text": "In Image Processing efficient algorithms are always pursued for applications that use the most advanced hardware architectures. Distance Transform is a classic operation for blurring effects, skeletonizing, segmentation and various other purposes. This article presents two implementations of the Euclidean Distance Transform using CUDA (Compute Unified Device Architecture) in GPU (Graphics Process Unit): of the Meijster's Sequential Algorithm and another is a very efficient algorithm of simple structure. Both using only shared memory. The results presented herein used images of various types and sizes to show a faster run time compared with the best-known implementations in CPU.",
"title": ""
},
{
"docid": "91b6b9e22f191cfec87d7b62d809542c",
"text": "In the past few years, the storage and analysis of large-scale and fast evolving networks present a great challenge. Therefore, a number of different techniques have been proposed for sampling large networks. In general, network exploration techniques approximate the original networks more accurately than random node and link selection. Yet, link selection with additional subgraph induction step outperforms most other techniques. In this paper, we apply subgraph induction also to random walk and forest-fire sampling. We analyze different real-world networks and the changes of their properties introduced by sampling. We compare several sampling techniques based on the match between the original networks and their sampled variants. The results reveal that the techniques with subgraph induction underestimate the degree and clustering distribution, while overestimate average degree and density of the original networks. Techniques without subgraph induction step exhibit exactly the opposite behavior. Hence, the performance of the sampling techniques from random selection category compared to network exploration sampling does not differ significantly, while clear differences exist between the techniques with subgraph induction step and the ones without it.",
"title": ""
},
{
"docid": "71dd012b54ae081933bddaa60612240e",
"text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "c7d23af5ad79d9863e83617cf8bbd1eb",
"text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.",
"title": ""
},
{
"docid": "192f8528ca2416f9a49ce152def2fbe6",
"text": "We study the extent to which we can infer users’ geographical locations from social media. Location inference from social media can benet many applications, such as disaster management, targeted advertising, and news content tailoring. In recent years, a number of algorithms have been proposed for identifying user locations on social media platforms such as Twier and Facebook from message contents, friend networks, and interactions between users. In this paper, we propose a novel probabilistic model based on factor graphs for location inference that oers several unique advantages for this task. First, the model generalizes previous methods by incorporating content, network, and deep features learned from social context. e model is also exible enough to support both supervised learning and semi-supervised learning. Second, we explore several learning algorithms for the proposed model, and present a Two-chain Metropolis-Hastings (MH+) algorithm, which improves the inference accuracy. ird, we validate the proposed model on three dierent genres of data – Twier, Weibo, and Facebook – and demonstrate that the proposed model can substantially improve the inference accuracy (+3.3-18.5% by F1-score) over that of several state-of-the-art methods.",
"title": ""
},
{
"docid": "fb5f52c0b845ff23e82d29f8fb705a0b",
"text": "Organizational culture continues to be cited as one of the most important factors for organizations’ success in an increasingly competitive and IT-driven global environment. Given the fact that organizational culture has an influence all over the organization, the complexity of its nature is increased when considering the relationship between business and IT. As a result, different factors that have influence on changing organizational culture were highlighted in literature. These factors are found in the research literature distributed in three main group; micro-environment factors, macro-environment factors and leader’s impact. One of the factors that have not been yet well investigated in researches is concerning business-IT alignment (BITA. Therefore the purpose of this paper is to investigate the impact of BITA maturity on organizational culture. The research process that we have followed is a literature survey followed by an in-depth case study. The result of this research shows a clear interrelation in theories of both BITA and organizational culture, and clear indications of BITA impact on organizational culture and its change. The findings may support both practitioners and researchers in order to understand the insights of the relationships between BITA and organizational culture components and provide a roadmap for improvements or desired changes in organizational culture with highlighted target business area.",
"title": ""
},
{
"docid": "e2af17b368fef36187c895ad5fd20a58",
"text": "We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so that the two tasks are performed jointly. We propose here such an approach for k-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.",
"title": ""
},
{
"docid": "565831ad3bd5c7efcd258e48fc7dc64b",
"text": "I n his 2003 book Moneyball, financial reporter Michael Lewis made a striking claim: the valuation of skills in the market for baseball players was grossly inefficient. The discrepancy was so large that when the Oakland Athletics hired an unlikely management group consisting of Billy Beane, a former player with mediocre talent, and two quantitative analysts, the team was able to exploit this inefficiency and outproduce most of the competition, while operating on a shoestring budget. The publication of Moneyball triggered a firestorm of criticism from baseball insiders (Lewis, 2004), and it raised the eyebrows of many economists as well. Basic price theory implies a tight correspondence between pay and productivity when markets are competitive and rich in information, as would seem to be the case in baseball. The market for baseball players receives daily attention from the print and broadcast media, along with periodic in-depth analysis from lifelong baseball experts and academic economists. Indeed, a case can be made that more is known about pay and quantified performance in this market than in any other labor market in the American economy. In this paper, we test the central portion of Lewis’s (2003) argument with elementary econometric tools and confirm his claims. In particular, we find that hitters’ salaries during this period did not accurately reflect the contribution of various batting skills to winning games. This inefficiency was sufficiently large that knowledge of its existence, and the ability to exploit it, enabled the Oakland Athletics to gain a substantial advantage over their competition. Further, we find",
"title": ""
},
{
"docid": "b8172acdca89e720783a803d98b271ad",
"text": "Vertically stacked nanowire field effect transistors currently dominate the race to become mainstream devices for 7-nm CMOS technology node. However, these devices are likely to suffer from the issue of nanowire stack position dependent drain current. In this paper, we show that the nanowire located at the bottom of the stack is farthest away from the source/drain silicide contacts and suffers from higher series resistance as compared to the nanowires that are higher up in the stack. It is found that upscaling the diameter of lower nanowires with respect to the upper nanowires improved uniformity of the current in each nanowire, but with the drawback of threshold voltage reduction. We propose to increase source/drain trench silicide depth as a more promising solution to this problem over the nanowire diameter scaling, without compromising on power or performance of these devices.",
"title": ""
},
{
"docid": "3346848a0b6d41856fe05fe2503065ed",
"text": "It has long been recognized that temporal anaphora in French and English depends on the aspectual distinction between events and states. For example, temporal location as well as temporal update depends on the aspectual type. This paper presents a general theory of aspect-based temporal anaphora, which extends from languages with grammatical tenses (like French and English) to tenseless languages (e.g. Kalaallisut). This theory also extends to additional aspect-dependent phenomena and to non-atomic aspectual types, processes and habits, which license anaphora to proper atomic parts (cf. nominal pluralities and kinds).",
"title": ""
},
{
"docid": "fbdda2f44b65944a0a47cee2418ed9dc",
"text": "Volume 5 • Issue 2 • 1000226 Adv Tech Biol Med, an open access journal ISSN: 2379-1764 The main focus of the forensic taphonomy is the study of environmental conditions influencing the decomposition process to estimate the postmortem interval and determine the cause and manner of death. The study is part of a specific branch of the forensic science that makes use of a broad aspect of methodologies taken from different areas of expertise such as botany, archeology, soil microbiology and entomology, all used for uncovering and examining clandestine graves allowing to succeed in the investigation process. Therefore, the “Forensic Mycology” emerges as a new science term meaning the study of the coexistence of fungal species nearby human cadavers as well as those fungal groups potentially useful in establishing a time of death [1,2].",
"title": ""
},
{
"docid": "0b507193ca68d05a3432a9e735df5d95",
"text": "Capturing image with defocused background by using a large aperture is a widely used technique in digital single-lens reflex (DSLR) camera photography. It is also desired to provide this function to smart phones. In this paper, a new algorithm is proposed to synthesize such an effect for a single portrait image. The foreground portrait is detected using a face prior based salient object detection algorithm. Then with an improved gradient domain guided image filter, the details in the foreground are enhanced while the background pixels are blurred. In this way, the background objects are defocused and thus the foreground objects are emphasized. The resultant image looks similar to image captured using a camera with a large aperture. The proposed algorithm can be adopted in smart phones, especially for the front cameras of smart phones.",
"title": ""
}
] |
scidocsrr
|
f85919d864264c7f1266b68b1291cd28
|
Predicting Billboard Success Using Data-Mining in P2P Networks
|
[
{
"docid": "66f684ba92fe735fecfbfb53571bad5f",
"text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.",
"title": ""
}
] |
[
{
"docid": "f70ff7f71ff2424fbcfea69d63a19de0",
"text": "We propose a method for learning similaritypreserving hash functions that map highdimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "c69e805751421b516e084498e7fc6f44",
"text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.",
"title": ""
},
{
"docid": "99574bec7125cfa9e2ebc19bb6bb4bf5",
"text": "Health care delivery and education has become a challenge for providers. Nurses and other professionals are challenged daily to assure that the patient has the necessary information to make informed decisions. Patients and their families are given a multitude of information about their health and commonly must make important decisions from these facts. Obstacles that prevent easy delivery of health care information include literacy, culture, language, and physiological barriers. It is up to the nurse to assess and evaluate the patient's learning needs and readiness to learn because everyone learns differently. This article will examine how each of these barriers impact care delivery along with teaching and learning strategies will be examined.",
"title": ""
},
{
"docid": "f381cce9e26441779b2741e19875f0d9",
"text": "Human affect recognition is the field of study associated with using automatic techniques to identify human emotion or human affective state. A person's affective states is often communicated non-verbally through body language. A large part of human body language communication is the use of head gestures. Almost all cultures use subtle head movements to convey meaning. Two of the most common and distinct head gestures are the head nod and the head shake gestures. In this paper we present a robust system to automatically detect head nod and shakes. We employ the Microsoft Kinect and utilise discrete Hidden Markov Models (HMMs) as the backbone to a machine learning based classifier within the system. The system achieves 86% accuracy on test datasets and results are provided.",
"title": ""
},
{
"docid": "e1885f9c373c355a4df9307c6d90bf83",
"text": "Ricinulei possess movable, slender pedipalps with small chelae. When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short range sensory organs which complement the long range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus a comparative overview is provided.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
{
"docid": "aaff9bc2844f2631e11944e049190ba4",
"text": "There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at a frame level or the re-training of the entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer or the entire network. We look at how L2 regularization using weight decay to the speaker independent model improves generalization. Other training factors are examined including the role momentum plays and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest show little gain from adaptation on a large vocabulary mobile speech recognition task.",
"title": ""
},
{
"docid": "26787002ed12cc73a3920f2851449c5e",
"text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.",
"title": ""
},
{
"docid": "52f95d1c0e198c64455269fd09108703",
"text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism, and we show that the ∗Corresponding author: Steve Y. Yang, Postal address: School of Business, Stevens Institute of Technology, 1 Castle Point on Hudson, Hoboken, NJ 07030 USA. Tel.: +1 201 216 3394 Fax: +1 201 216 5385 Email addresses: salmahdi@stevens.edu (Saud Almahdi), steve.yang@stevens.edu (Steve Y. Yang) Preprint submitted to Expert Systems with Applications June 15, 2017",
"title": ""
},
{
"docid": "e03d8f990cfcb07d8088681c3811b542",
"text": "The environments in which we live and the tasks we must perform to survive and reproduce have shaped the design of our perceptual systems through evolution and experience. Therefore, direct measurement of the statistical regularities in natural environments (scenes) has great potential value for advancing our understanding of visual perception. This review begins with a general discussion of the natural scene statistics approach, of the different kinds of statistics that can be measured, and of some existing measurement techniques. This is followed by a summary of the natural scene statistics measured over the past 20 years. Finally, there is a summary of the hypotheses, models, and experiments that have emerged from the analysis of natural scene statistics.",
"title": ""
},
{
"docid": "6c7284ca77809210601c213ee8a685bb",
"text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.",
"title": ""
},
{
"docid": "7a356a485b46c6fc712a0174947e142e",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.",
"title": ""
},
{
"docid": "aa1c565018371cf12e703e06f430776b",
"text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"title": ""
},
{
"docid": "a825bab34866182aa585e079a1596b92",
"text": "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIξ model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIξ model can formally solve them. The major drawback of the AIξ model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIξtl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIξtl is of the order t ·2l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIξ theory to other AI approaches. Any response to marcus@hutter1.de is welcome.",
"title": ""
},
{
"docid": "ee73847c9dd27672c9860219c293b8dd",
"text": "Sensing cost and data quality are two primary concerns in mobile crowd sensing. In this article, we propose a new crowd sensing paradigm, sparse mobile crowd sensing, which leverages the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated, thus lowering overall sensing cost (e.g., smartphone energy consumption and incentives) while ensuring data quality. Sparse mobile crowdsensing applications intelligently select only a small portion of the target area for sensing while inferring the data of the remaining unsensed area with high accuracy. We discuss the fundamental research challenges in sparse mobile crowdsensing, and design a general framework with potential solutions to the challenges. To verify the effectiveness of the proposed framework, a sparse mobile crowdsensing prototype for temperature and traffic monitoring is implemented and evaluated. With several future research directions identified in sparse mobile crowdsensing, we expect that more research interests will be stimulated in this novel crowdsensing paradigm.",
"title": ""
},
{
"docid": "36e72fe58858b4caf4860a3bba5fced4",
"text": "When operating over extended periods of time, an autonomous system will inevitably be faced with severe changes in the appearance of its environment. Coping with such changes is more and more in the focus of current robotics research. In this paper, we foster the development of robust place recognition algorithms in changing environments by describing a new dataset that was recorded during a 728 km long journey in spring, summer, fall, and winter. Approximately 40 hours of full-HD video cover extreme seasonal changes over almost 3000 km in both natural and man-made environments. Furthermore, accurate ground truth information are provided. To our knowledge, this is by far the largest SLAM dataset available at the moment. In addition, we introduce an open source Matlab implementation of the recently published SeqSLAM algorithm and make it available to the community. We benchmark SeqSLAM using the novel dataset and analyse the influence of important parameters and algorithmic steps.",
"title": ""
},
{
"docid": "e5fc30045f458f84435363349d22204d",
"text": "Today, root cause analysis of failures in data centers is mostly done through manual inspection. More often than not, cus- tomers blame the network as the culprit. However, other components of the system might have caused these failures. To troubleshoot, huge volumes of data are collected over the entire data center. Correlating such large volumes of diverse data collected from different vantage points is a daunting task even for the most skilled technicians. In this paper, we revisit the question: how much can you infer about a failure in the data center using TCP statistics collected at one of the endpoints? Using an agent that cap- tures TCP statistics we devised a classification algorithm that identifies the root cause of failure using this information at a single endpoint. Using insights derived from this classi- fication algorithm we identify dominant TCP metrics that indicate where/why problems occur in the network. We val- idate and test these methods using data that we collect over a period of six months in a production data center.",
"title": ""
},
{
"docid": "33ae11cfc67a9afe34483444a03bfd5a",
"text": "In today’s interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance to Kitchenham’s guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points.",
"title": ""
},
{
"docid": "ea8256df8504cd392f98d92612e4a9a0",
"text": "Employment specialists play a pivotal role in assisting youth and adults with disabilities find and retain jobs. This requires a unique combination of skills, competencies and personal attributes. While the fields of career counseling, vocational rehabilitation and special education transition have documented the ideal skills sets needed to achieve desired outcomes, the authors characterize these as essential mechanics. What have not been examined are the personal qualities that effective employment specialists possess. Theorizing that these successful professionals exhibit traits and behaviors beyond the mechanics, the authors conducted a qualitative study incorporating in-depth interviews with 17 top-performing staff of a highly successful national program, The Marriott Foundation’s Bridges from school to work. Four personal attributes emerged from the interviews: (a) principled optimism; (b) cultural competence; (c) business-oriented professionalism; and (d) networking savvy. In presenting these findings, the authors discuss the implications for recruitment, hiring, training, and advancing truly effective employment specialists, and offer recommendations for further research.",
"title": ""
}
] |
scidocsrr
|
3639e5a245922d1dec3cdca188c5b5be
|
Knowledge, Motivation, and Adaptive Behavior: A Framework for Improving Selling Effectiveness
|
[
{
"docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97",
"text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.",
"title": ""
}
] |
[
{
"docid": "86e646b845384d3cfbb146075be5c02a",
"text": "Content-Based Image Retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research e orts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Speci cally, these e orts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high level concepts and low level features; (2) subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which e ectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high level query and perception subjectivity are captured by dynamically updated weights based on the user's relevance feedback. The experimental results show that the proposed approach greatly reduces the user's e ort of composing a query and captures the user's information need more precisely.",
"title": ""
},
{
"docid": "4ec7af75127df22c9cb7bd279cb2bcf3",
"text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.",
"title": ""
},
{
"docid": "64d755d95353a66ec967c7f74aaf2232",
"text": "Purpose: Platinum-based drugs, in particular cisplatin (cis-diamminedichloridoplatinum(II), CDDP), are used for treatment of squamous cell carcinoma of the head and neck (SCCHN). Despite initial responses, CDDP treatment often results in chemoresistance, leading to therapeutic failure. The role of primary resistance at subclonal level and treatment-induced clonal selection in the development of CDDP resistance remains unknown.Experimental Design: By applying targeted next-generation sequencing, fluorescence in situ hybridization, microarray-based transcriptome, and mass spectrometry-based phosphoproteome analysis to the CDDP-sensitive SCCHN cell line FaDu, a CDDP-resistant subline, and single-cell derived subclones, the molecular basis of CDDP resistance was elucidated. The causal relationship between molecular features and resistant phenotypes was determined by siRNA-based gene silencing. The clinical relevance of molecular findings was validated in patients with SCCHN with recurrence after CDDP-based chemoradiation and the TCGA SCCHN dataset.Results: Evidence of primary resistance at clonal level and clonal selection by long-term CDDP treatment was established in the FaDu model. Resistance was associated with aneuploidy of chromosome 17, increased TP53 copy-numbers and overexpression of the gain-of-function (GOF) mutant variant p53R248L siRNA-mediated knockdown established a causal relationship between mutant p53R248L and CDDP resistance. Resistant clones were also characterized by increased activity of the PI3K-AKT-mTOR pathway. The poor prognostic value of GOF TP53 variants and mTOR pathway upregulation was confirmed in the TCGA SCCHN cohort.Conclusions: Our study demonstrates a link of intratumoral heterogeneity and clonal evolution as important mechanisms of drug resistance in SCCHN and establishes mutant GOF TP53 variants and the PI3K/mTOR pathway as molecular targets for treatment optimization. Clin Cancer Res; 24(1); 158-68. ©2017 AACR.",
"title": ""
},
{
"docid": "3d9c02413c80913cb32b5094dcf61843",
"text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.",
"title": ""
},
{
"docid": "7ec6790b96e9185bf822eea3a27ad7ab",
"text": "Multi-level converter architectures have been explored for a variety of applications including high-power DC-AC inverters and DC-DC converters. In this work, we explore flying-capacitor multi-level (FCML) DC-DC topologies as a class of hybrid switched-capacitor/inductive converter. Compared to other candidate architectures in this area (e.g. Series-Parallel, Dickson), FCML converters have notable advantages such as the use of single-rated low-voltage switches, potentially lower switching loss, lower passive component volume, and enable regulation across the full VDD-VOUT range. It is shown that multimode operation, including previously published resonant and dynamic off-time modulation, form a single set of techniques that can be used to extend high efficiency over a wide power density range. Some of the general operating considerations of FCML converters, such as the challenge of maintaining voltage balance on flying capacitors, are shown to be of equal concern in other soft-switched SC converter topologies. Experimental verification from a 24V:12V, 3-level converter is presented to show multimode operation with a nominally 2:1 topology. A second 50V:7V 4-level FCML converter demonstrates operation with variable regulation. A method is presented to balance flying capacitor voltages through low frequency closed-loop feedback.",
"title": ""
},
{
"docid": "7192e2ae32eb79aaefdf8e54cdbba715",
"text": "Recently, ridge gap waveguides are considered as guiding structures in high-frequency applications. One of the major problems facing this guiding structure is the limited ability of using all the possible bandwidths due to the limited bandwidth of the transition to the coaxial lines. Here, a review of the different excitation techniques associated with this guiding structure is presented. Next, some modifications are proposed to improve its response in order to cover the possible actual bandwidth. The major aim of this paper is to introduce a wideband coaxial to ridge gap waveguide transition based on five sections of matching networks. The introduced transition shows excellent return loss, which is better than 15 dB over the actual possible bandwidth for double transitions.",
"title": ""
},
{
"docid": "74ad888a96e6dd43bc5f909623f72e43",
"text": "The goal of this roadmap paper is to summarize the stateof-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for adaptive solutions, processes, from centralized to decentralized control, and practical run-time verification and validation. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.",
"title": ""
},
{
"docid": "92e62d56458c3e7c4cd845e1de94178f",
"text": "We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.",
"title": ""
},
{
"docid": "c8a9919a2df2cfd730816cd0171f08dd",
"text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classi fication (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual fea tures from both global and local views. Existing image emotion classification works using hand-crafted features o r deep features mainly focus on either low-level visual featu res or semantic-level image representations without taking al l factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics , image aesthetics and low-level visual features through mul tiple instance learning (MIL) in order to effectively cope wit h noisy labeled data, such as images collected from the Intern et. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-craf ted features. The proposed approach also outperforms the state of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.",
"title": ""
},
{
"docid": "fcbd256ad05ef96c9f2997fbfbace473",
"text": "The Internet of Things (IoT) envisions a world-wide, interconnected network of smart physical entities. These physical entities generate a large amount of data in operation, and as the IoT gains momentum in terms of deployment, the combined scale of those data seems destined to continue to grow. Increasingly, applications for the IoT involve analytics. Data analytics is the process of deriving knowledge from data, generating value like actionable insights from them. This article reviews work in the IoT and big data analytics from the perspective of their utility in creating efficient, effective, and innovative applications and services for a wide spectrum of domains. We review the broad vision for the IoT as it is shaped in various communities, examine the application of data analytics across IoT domains, provide a categorisation of analytic approaches, and propose a layered taxonomy from IoT data to analytics. This taxonomy provides us with insights on the appropriateness of analytical techniques, which in turn shapes a survey of enabling technology and infrastructure for IoT analytics. Finally, we look at some tradeoffs for analytics in the IoT that can shape future research.",
"title": ""
},
{
"docid": "a6cf26910cb0cff08b390a1814cc2a40",
"text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. In this paper, we describe an algorithm for following ill-structured roads in which dominant texture orientations computed with multi-scale Gabor wavelet filters vote for a consensus road vanishing point location. In-plane road curvature and out-of-plane undulation are estimated in each image by tracking the vanishing point indicated by a horizontal image strip as it moves up toward the putative vanishing line. Particle filtering is also used to track the vanishing point sequence induced by road curvature from image to image. Results are shown for vanishing point localization on a variety of road scenes ranging from gravel roads to dirt trails to highways.",
"title": ""
},
{
"docid": "77666dea1c0788352d0172a4a3395d59",
"text": "A top-down page segmentation technique known as the recursive X-Y cut decomposes a document image recursively into a set of rectanguzar blocks. This paper proposes that the recursive X-Y cut be implemented using bounding bozes of connected components of black pixels instead of using image pizels. The advantage is that great improvement can be achieved in computation. In fact, once bounding boxes of connected components are obtained, the recursive X-Y cut is completed within an order of a second on Spare-10 workutations for letter-sized document images scanned at 300 dpi resolution. keywords: page segmentation, recursive X-Y cut, projection profile, connected components",
"title": ""
},
{
"docid": "545509f9e3aa65921a7d6faa41247ae6",
"text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.",
"title": ""
},
{
"docid": "44a6cfa975745624ae4bebec17702d2a",
"text": "OBJECTIVE\nTo evaluate the performance of the International Ovarian Tumor Analysis (IOTA) ADNEX model in the preoperative discrimination between benign ovarian (including tubal and para-ovarian) tumors, borderline ovarian tumors (BOT), Stage I ovarian cancer (OC), Stage II-IV OC and ovarian metastasis in a gynecological oncology center in Brazil.\n\n\nMETHODS\nThis was a diagnostic accuracy study including 131 women with an adnexal mass invited to participate between February 2014 and November 2015. Before surgery, pelvic ultrasound examination was performed and serum levels of tumor marker CA 125 were measured in all women. Adnexal masses were classified according to the IOTA ADNEX model. Histopathological diagnosis was the gold standard. Receiver-operating characteristics (ROC) curve analysis was used to determine the diagnostic accuracy of the model to classify tumors into different histological types.\n\n\nRESULTS\nOf 131 women, 63 (48.1%) had a benign ovarian tumor, 16 (12.2%) had a BOT, 17 (13.0%) had Stage I OC, 24 (18.3%) had Stage II-IV OC and 11 (8.4%) had ovarian metastasis. The area under the ROC curve (AUC) was 0.92 (95% CI, 0.88-0.97) for the basic discrimination between benign vs malignant tumors using the IOTA ADNEX model. Performance was high for the discrimination between benign vs Stage II-IV OC, BOT vs Stage II-IV OC and Stage I OC vs Stage II-IV OC, with AUCs of 0.99, 0.97 and 0.94, respectively. Performance was poor for the differentiation between BOT vs Stage I OC and between Stage I OC vs ovarian metastasis with AUCs of 0.64.\n\n\nCONCLUSION\nThe majority of adnexal masses in our study were classified correctly using the IOTA ADNEX model. On the basis of our findings, we would expect the model to aid in the management of women with an adnexal mass presenting to a gynecological oncology center. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.",
"title": ""
},
{
"docid": "aae97dd982300accb15c05f9aa9202cd",
"text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. Therefore we developed a new bipedal walking robot which is capable to express emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.",
"title": ""
},
{
"docid": "02f62ec1ea8b7dba6d3a5d4ea08abe2d",
"text": "MicroRNAs (miRNAs) are short, 22–25 nucleotide long transcripts that may suppress entire signaling pathways by interacting with the 3’-untranslated region (3’-UTR) of coding mRNA targets, interrupting translation and inducing degradation of these targets. The long 3’-UTRs of brain transcripts compared to other tissues predict important roles for brain miRNAs. Supporting this notion, we found that brain miRNAs co-evolved with their target transcripts, that non-coding pseudogenes with miRNA recognition elements compete with brain coding mRNAs on their miRNA interactions, and that Single Nucleotide Polymorphisms (SNPs) on such pseudogenes are enriched in mental diseases including autism and schizophrenia, but not Alzheimer’s disease (AD). Focusing on evolutionarily conserved and primate-specifi c miRNA controllers of cholinergic signaling (‘CholinomiRs’), we fi nd modifi ed CholinomiR levels in the brain and/or nucleated blood cells of patients with AD and Parkinson’s disease, with treatment-related diff erences in their levels and prominent impact on the cognitive and anti-infl ammatory consequences of cholinergic signals. Examples include the acetylcholinesterase (AChE)-targeted evolutionarily conserved miR-132, whose levels decline drastically in the AD brain. Furthermore, we found that interruption of AChE mRNA’s interaction with the primatespecifi c CholinomiR-608 in carriers of a SNP in the AChE’s miR-608 binding site induces domino-like eff ects that reduce the levels of many other miR-608 targets. Young, healthy carriers of this SNP express 40% higher brain AChE activity than others, potentially aff ecting the responsiveness to AD’s anti-AChE therapeutics, and show elevated trait anxiety, infl ammation and hypertension. Non-coding regions aff ecting miRNA-target interactions in neurodegenerative brains thus merit special attention.",
"title": ""
},
{
"docid": "c29b91a5b580a620bb245519695a6cd9",
"text": "It is commonly believed that datacenter networking software must sacri ce generality to attain high performance. The popularity of specialized distributed systems designed speci cally for niche technologies such as RDMA, lossless networks, FPGAs, and programmable switches testi es to this belief. In this paper, we show that such specialization is unnecessary. eRPC is a new general-purpose remote procedure call (RPC) library that o ers performance comparable to specialized systems, while running on commodity CPUs in traditional datacenter networks based on either lossy Ethernet or lossless fabrics. eRPC performs well in three key metrics: message rate for small messages; bandwidth for large messages; and scalability to a large number of nodes and CPU cores. It handles packet loss, congestion, and background request execution. In microbenchmarks, one CPU core can handle up to 5 million small eRPC requests per second, or saturate a 40 Gbps link with large messages. We port a production-grade implementation of Raft state machine replication to eRPC without modifying the core Raft source code. We achieve 5.5 μs of replication latency on lossy Ethernet, which is faster or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA.",
"title": ""
},
{
"docid": "d88067f2dbcd55dae083134b5eeb7868",
"text": "Current state-of-the-art human activity recognition is fo cused on the classification of temporally trimmed videos in which only one action occurs per frame. We propose a simple, yet effective, method for the temporal detection of activities in temporally untrimmed videos with the help of untrimmed classification. Firstly, our model predicts th e top k labels for each untrimmed video by analysing global video-level features. Secondly, frame-level binary class ification is combined with dynamic programming to generate the temporally trimmed activity proposals . Finally, each proposal is assigned a label based on the global label, and scored with the score of the temporal activity proposal and the global score. Ultimately, we show that untrimmed video classification models can be used as stepping stone for temporal detection.",
"title": ""
},
{
"docid": "4fa9f9ac4204de1394cd7133254aa046",
"text": "Over the last ten years, face recognition has become a specialized applications area within the field of computer vision. Sophisticated commercial systems have been developed that achieve high recognition rates. Although elaborate, many of these systems include a subspace projection step and a nearest neighbor classifier. The goal of this paper is to rigorously compare two subspace projection techniques within the context of a baseline system on the face recognition task. The first technique is principal component analysis (PCA), a well-known “baseline” for projection techniques. The second technique is independent component analysis (ICA), a newer method that produces spatially localized and statistically independent basis vectors. Testing on the FERET data set (and using standard partitions), we find that, when a proper distance metric is used, PCA significantly outperforms ICA on a human face recognition task. This is contrary to previously",
"title": ""
},
{
"docid": "aa7026774074ed81dd7836ef6dc44334",
"text": "To improve safety on the roads, next-generation vehicles will be equipped with short-range communication technologies. Many applications enabled by such communication will be based on a continuous broadcast of information about the own status from each vehicle to the neighborhood, often referred as cooperative awareness or beaconing. Although the only standardized technology allowing direct vehicle-to-vehicle (V2V) communication has been IEEE 802.11p until now, the latest release of long-term evolution (LTE) included advanced device-to-device features designed for the vehicular environment (LTE-V2V) making it a suitable alternative to IEEE 802.11p. Advantages and drawbacks are being considered for both technologies, and which one will be implemented is still under debate. The aim of this paper is thus to provide an insight into the performance of both technologies for cooperative awareness and to compare them. The investigation is performed analytically through the implementation of novel models for both IEEE 802.11p and LTE-V2V able to address the same scenario, with consistent settings and focusing on the same output metrics. The proposed models take into account several aspects that are often neglected by related works, such as hidden terminals and capture effect in IEEE 802.11p, the impact of imperfect knowledge of vehicles position on the resource allocation in LTE-V2V, and the various modulation and coding scheme combinations that are available in both technologies. Results show that LTE-V2V allows us to maintain the required quality of service at even double or more the distance than IEEE 802.11p in moderate traffic conditions. However, due to the half-duplex nature of devices and the structure of LTE frames, it shows lower capacity than IEEE 802.11p if short distances and very high vehicle density are targeted.",
"title": ""
}
] |
scidocsrr
|
cec3ee6652ec779e0f0dfd20b8ab828d
|
Effective Exploration for MAVs Based on the Expected Information Gain
|
[
{
"docid": "88a21d973ec80ee676695c95f6b20545",
"text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"title": ""
}
] |
[
{
"docid": "fcd3eb613db484d7d2bd00a03e5192bc",
"text": "A design methodology by including the finite PSR of the error amplifier to improve the low frequency PSR of the Low dropout regulator with improved voltage subtractor circuit is proposed. The gm/ID method based on exploiting the all regions of operation of the MOS transistor is utilized for the design of LDO regulator. The PSR of the LDO regulator is better than -50dB up to 10MHz frequency for the load currents up to 20mA with 0.15V drop-out voltage. A comparison is made between different schematics of the LDO regulator and proposed methodology for the LDO regulator with improved voltage subtractor circuit. Low frequency PSR of the regulator can be significantly improved with proposed methodology.",
"title": ""
},
{
"docid": "741efb8046bb888b944768784b87d70a",
"text": "Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.",
"title": ""
},
{
"docid": "7ea777ccae8984c26317876d804c323c",
"text": "The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.",
"title": ""
},
{
"docid": "0f5c1d2503a2845e409d325b085bf600",
"text": "We present Accel, a novel semantic video segmentation system that achieves high accuracy at low inference cost by combining the predictions of two network branches: (1) a reference branch that extracts high-detail features on a reference keyframe, and warps these features forward using frame-to-frame optical flow estimates, and (2) an update branch that computes features of adjustable quality on the current frame, performing a temporal update at each video frame. The modularity of the update branch, where feature subnetworks of varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables operation over a new, state-of-the-art accuracy-throughput trade-off spectrum. Over this curve, Accel models achieve both higher accuracy and faster inference times than the closest comparable single-frame segmentation networks. In general, Accel significantly outperforms previous work on efficient semantic video segmentation, correcting warping-related error that compounds on datasets with complex dynamics. Accel is end-to-end trainable and highly modular: the reference network, the optical flow network, and the update network can each be selected independently, depending on application requirements, and then jointly fine-tuned. The result is a robust, general system for fast, high-accuracy semantic segmentation on video.",
"title": ""
},
{
"docid": "798f8c412ac3fbe1ab1b867bc8ce68d0",
"text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built mobile application prototype based on this model and use it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.",
"title": ""
},
{
"docid": "7eb4e5b88843d81390c14aae2a90c30b",
"text": "A low-power, high-speed, but with a large input dynamic range and output swing class-AB output buffer circuit, which is suitable for the flat-panel display application, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors, thus draws little current during static, but has an improved driving capability during transients. It is demonstrated in a 0.6 m CMOS technology.",
"title": ""
},
{
"docid": "1090297224c76a5a2c4ade47cb932dba",
"text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.",
"title": ""
},
{
"docid": "cb47cc2effac1404dd60a91a099699d1",
"text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.",
"title": ""
},
{
"docid": "c71d229d69d79747eca7e87e342ba6d8",
"text": "This paper proposes a road detection approach based solely on dense 3D-LIDAR data. The approach is built up of four stages: (1) 3D-LIDAR points are projected to a 2D reference plane; then, (2) dense height maps are computed using an upsampling method; (3) applying a sliding-window technique in the upsampled maps, probability distributions of neighbouring regions are compared according to a similarity measure; finally, (4) morphological operations are used to enhance performance against disturbances. Our detection approach does not depend on road marks, thus it is suitable for applications on rural areas and inner-city with unmarked roads. Experiments have been carried out in a wide variety of scenarios using the recent KITTI-ROAD benchmark, obtaining promising results when compared to other state-of-art approaches.",
"title": ""
},
{
"docid": "e84699f276c807eb7fddb49d61bd8ae8",
"text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.",
"title": ""
},
{
"docid": "c9e9807acbc69afd9f6a67d9bda0d535",
"text": "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.",
"title": ""
},
{
"docid": "6bea1d7242fc23ec8f462b1c8478f2c1",
"text": "Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews.",
"title": ""
},
{
"docid": "43fa16b19c373e2d339f45c71a0a2c22",
"text": "McKusick-Kaufman syndrome is a human developmental anomaly syndrome comprising mesoaxial or postaxial polydactyly, congenital heart disease and hydrometrocolpos. This syndrome is diagnosed most frequently in the Old Order Amish population and is inherited in an autosomal recessive pattern with reduced penetrance and variable expressivity. Homozygosity mapping and linkage analyses were conducted using two pedigrees derived from a larger pedigree published in 1978. The PedHunter software query system was used on the Amish Genealogy Database to correct the previous pedigree, derive a minimal pedigree connecting those affected sibships that are in the database and determine the most recent common ancestors of the affected persons. Whole genome short tandem repeat polymorphism (STRP) screening showed homozygosity in 20p12, between D20S162 and D20S894 , an area that includes the Alagille syndrome critical region. The peak two-point LOD score was 3.33, and the peak three-point LOD score was 5.21. The physical map of this region has been defined, and additional polymorphic markers have been isolated. The region includes several genes and expressed sequence tags (ESTs), including the jagged1 gene that recently has been shown to be haploinsufficient in the Alagille syndrome. Sequencing of jagged1 in two unrelated individuals affected with McKusick-Kaufman syndrome has not revealed any disease-causing mutations.",
"title": ""
},
{
"docid": "44d4114280e3ab9f6bfa0f0b347114b7",
"text": "Dozens of Electronic Control Units (ECUs) can be found on modern vehicles for safety and driving assistance. These ECUs also introduce new security vulnerabilities as recent attacks have been reported by plugging the in-vehicle system or through wireless access. In this paper, we focus on the security of the Controller Area Network (CAN), which is a standard for communication among ECUs. CAN bus by design does not have sufficient security features to protect it from insider or outsider attacks. Intrusion detection system (IDS) is one of the most effective ways to enhance vehicle security on the insecure CAN bus protocol. We propose a new IDS based on the entropy of the identifier bits in CAN messages. The key observation is that all the known CAN message injection attacks need to alter the CAN ID bits and analyzing the entropy of such bits can be an effective way to detect those attacks. We collected real CAN messages from a vehicle (2016 Ford Fusion) and performed simulated message injection attacks. The experimental results showed that our entropy based IDS can successfully detect all the injection attacks without disrupting the communication on CAN.",
"title": ""
},
{
"docid": "a48b7c679008235568d3d431081277b4",
"text": "This paper discusses the security aspects of a registration protocol in a mobile satellite communication system. We propose a new mobile user authentication and data encryption scheme for mobile satellite communication systems. The scheme can remedy a replay attack.",
"title": ""
},
{
"docid": "9a1151e45740dfa663172478259b77b6",
"text": "Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which implies in different performances according to the characteristics of the ontologies. An ontology metamatcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the use of matchers. We presented in this work an ontology meta-matcher that combines several ontology matchers making use of the evolutionary meta-heuristic prey-predator as a means of parameterization of the same. Resumo. Todo ano, diversos novos alinhadores de ontologias são propostos na literatura, cada um utilizando uma heurı́stica diferente, o que implica em desempenhos distintos de acordo com as caracterı́sticas das ontologias. Um meta-alinhador consiste de um algoritmo que combina diversas abordagens a fim de obter melhores resultados em diferentes cenários. Para atingir esse objetivo, é necessária a definição de um critério para melhor uso de alinhadores. Neste trabalho, é apresentado um meta-alinhador de ontologias que combina vários alinhadores através da meta-heurı́stica evolutiva presa-predador como meio de parametrização das mesmas.",
"title": ""
},
{
"docid": "a32c635c1f4f4118da20cee6ffb5c1ea",
"text": "We analyzed the influence of education and of culture on the neuropsychological profile of an indigenous and a nonindigenous population. The sample included 27 individuals divided into four groups: (a) seven illiterate Maya indigenous participants, (b) six illiterate Pame indigenous participants, (c) seven nonindigenous participants with no education, and (d) seven Maya indigenous participants with 1 to 4 years of education . A brief neuropsychological test battery developed and standardized in Mexico was individually administered. Results demonstrated differential effects for both variables. Both groups of indigenous participants (Maya and Pame) obtained higher scores in visuospatial tasks, and the level of education had significant effects on working and verbal memory. Our data suggested that culture dictates what it is important for survival and that education could be considered as a type of subculture that facilitates the development of certain skills.",
"title": ""
},
{
"docid": "c460660e6ea1cc38f4864fe4696d3a07",
"text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.",
"title": ""
},
{
"docid": "a25fa0c0889b62b70bf95c16f9966cc4",
"text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.",
"title": ""
},
{
"docid": "273abcab379d49680db121022fba3e8f",
"text": "Current emotion recognition computational techniques have been successful on associating the emotional changes with the EEG signals, and so they can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotions classes mainly due to signal’s features and noise, EEG constraints and subject-dependent issues. In order to address these issues, in this paper a novel feature-based emotion recognition model is proposed for EEGbased Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features which are relevant for signal pre-processing and recognition classification tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of the emotion classification task by combining mutual information based feature selection methods and kernel classifiers. Experiments using our approach for emotion classification which combines efficient feature selection methods and efficient kernel-based classifiers on standard EEG datasets show the promise of the approach when compared with state-of-the-art computational methods. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
6e7a76546b6b3b81447034e21dbcca74
|
THE FLEXIBLE CORRECTION MODEL : THE ROLE OF NAIVE THEORIES OF BIAS IN BIAS CORRECTION
|
[
{
"docid": "eed70d4d8bfbfa76382bfc32dd12c3db",
"text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.",
"title": ""
}
] |
[
{
"docid": "5ba6ec8c7f9dc4d2b6c55a505ce394a7",
"text": "We develop a data structure, the spatialized normal cone hierarchy, and apply it to interactive solutions for model silhouette extraction, local minimum distance computations, and area light source shadow umbra and penumbra boundary determination. The latter applications extend the domain of surface normal encapsulation from problems described by a point and a model to problems involving two models.",
"title": ""
},
{
"docid": "4d9f0cf629cd3695a2ec249b81336d28",
"text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.",
"title": ""
},
{
"docid": "f0916caf8abc62643a1e55781798c18e",
"text": "In this paper, we consider the problem of learning a policy by observing numerous non-expert agents. Our goal is to extract a policy that, with high-confidence, acts better than the agents’ average performance. Such a setting is important for real-world problems where expert data is scarce but non-expert data can easily be obtained, e.g. by crowdsourcing. Our approach is to pose this problem as safe policy improvement in reinforcement learning. First, we evaluate an average behavior policy and approximate its value function. Then, we develop a stochastic policy improvement algorithm that safely improves the average behavior. The primary advantages of our approach, termed Rerouted Behavior Improvement (RBI), over other safe learning methods are its stability in the presence of value estimation errors and the elimination of a policy search process. We demonstrate these advantages in the Taxi grid-world domain and in four games from the Atari learning environment.",
"title": ""
},
{
"docid": "ea5e08627706532504b9beb6f4dc6650",
"text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.",
"title": ""
},
{
"docid": "1f753b8e3c0178cabbc8a9f594c40c8c",
"text": "For easy comprehensibility, rules are preferrable to non-linear kernel functions in the analysis of bio-medical data. In this paper, we describe two rule induction approaches—C4.5 and our PCL classifier—for discovering rules from both traditional clinical data and recent gene expression or proteomic profiling data. C4.5 is a widely used method, but it has two weaknesses, the single coverage constraint and the fragmentation problem, that affect its accuracy. PCL is a new rule-based classifier that overcomes these two weaknesses of decision trees by using many significant rules. We present a thorough comparison to show that our PCL method is much more accurate than C4.5, and it is also superior to Bagging and Boosting in general.",
"title": ""
},
{
"docid": "753a964fe17040a43ecbd2ae85b0701c",
"text": "We are analyzing the visualizations in the scientific literature to enhance search services, detect plagiarism, and study bibliometrics. An immediate problem is the ubiquitous use of multi-part figures: single images with multiple embedded sub-visualizations. Such figures account for approximately 35% of the figures in the scientific literature. Conventional image segmentation techniques and other existing approaches have been shown to be ineffective for parsing visualizations. We propose an algorithm to automatically segment multi-chart visualizations into a set of single-chart visualizations, thereby enabling downstream analysis. Our approach first splits an image into fragments based on background color and layout patterns. An SVM-based binary classifier then distinguishes complete charts from auxiliary fragments such as labels, ticks, and legends, achieving an average 98.1% accuracy. Next, we recursively merge fragments to reconstruct complete visualizations, choosing between alternative merge trees using a novel scoring function. To evaluate our approach, we used 261 scientific multi-chart figures randomly selected from the Pubmed database. Our algorithm achieves 80% recall and 85% precision of perfect extractions for the common case of eight or fewer sub-figures per figure. Further, even imperfect extractions are shown to be sufficient for most chart classification and reasoning tasks associated with bibliometrics and academic search applications.",
"title": ""
},
{
"docid": "24855976195933799d110122cbbbe6d5",
"text": "Association of audio events with video events presents a challenge to a typical camera-microphone approach in order to capture AV signals from a large distance. Setting up a long range microphone array and performing geo-calibration of both audio and video sensors is difficult. In this work, in addition to a geo-calibrated electro-optical camera, we propose to use a novel optical sensor a Laser Doppler Vibrometer (LDV) for real-time audio sensing, which allows us to capture acoustic signals from a large distance, and to use the same geo-calibration for both the camera and the audio (via LDV). We have promising preliminary results on association of the audio recording of speech with the video of the human speaker.",
"title": ""
},
{
"docid": "20171d6fa41e3c1a02e800b1792e0942",
"text": "Plastics pollution in the ocean is an area of growing concern, with research efforts focusing on both the macroplastic (>5mm) and microplastic (<5mm) fractions. In the 1990 s it was recognized that a minor source of microplastic pollution was derived from liquid hand-cleansers that would have been rarely used by the average consumer. In 2009, however, the average consumer is likely to be using microplastic-containing products on a daily basis, as the majority of facial cleansers now contain polyethylene microplastics which are not captured by wastewater plants and will enter the oceans. Four microplastic-containing facial cleansers available in New Zealand supermarkets were used to quantify the size of the polythelene fragments. Three-quarters of the brands had a modal size of <100 microns and could be immediately ingested by planktonic organisms at the base of the food chain. Over time the microplastics will be subject to UV-degradation and absorb hydrophobic materials such as PCBs, making them smaller and more toxic in the long-term. Marine scientists need to educate the public to the dangers of using products that pose an immediate and long-term threat to the health of the oceans and the food we eat.",
"title": ""
},
{
"docid": "b81c0d819f2afb0a0ff79b7c6aeb8ff7",
"text": "This paper proposes a framework to identify and evaluate companies from the technological perspective to support merger and acquisition (M&A) target selection decision-making. This employed a text mining-based patent map approach to identify companies which can fulfill a specific strategic purpose of M&A for enhancing technological capabilities. The patent map is the visualized technological landscape of a technology industry by using technological proximities among patents, so companies which closely related to the strategic purpose can be identified. To evaluate the technological aspects of the identified companies, we provide the patent indexes that evaluate both current and future technological capabilities and potential technology synergies between acquiring and acquired companies. Furthermore, because the proposed method evaluates potential targets from the overall corporate perspective and the specific strategic perspectives simultaneously, more robust and meaningful result can be obtained than when only one perspective is considered. Thus, the proposed framework can suggest the appropriate target companies that fulfill the strategic purpose of M&A for enhancing technological capabilities. For the verification of the framework, we provide an empirical study using patent data related to flexible display technology.",
"title": ""
},
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: schwartz@cs.stanford.edu Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
},
{
"docid": "b3ebbff355dfc23b4dfbab3bc3012980",
"text": "Research with young children has shown that, like adults, they focus selectively on the aspects of an actor's behavior that are relevant to his or her underlying intentions. The current studies used the visual habituation paradigm to ask whether infants would similarly attend to those aspects of an action that are related to the actor's goals. Infants saw an actor reach for and grasp one of two toys sitting side by side on a curtained stage. After habituation, the positions of the toys were switched and babies saw test events in which there was a change in either the path of motion taken by the actor's arm or the object that was grasped by the actor. In the first study, 9-month-old infants looked longer when the actor grasped a new toy than when she moved through a new path. Nine-month-olds who saw an inanimate object of approximately the same dimensions as the actor's arm touch the toy did not show this pattern in test. In the second study, 5-month-old infants showed similar, though weaker, patterns. A third study provided evidence that the findings for the events involving a person were not due to perceptual changes in the objects caused by occlusion by the hand. A fourth study replicated the 9 month results for a human grasp at 6 months, and revealed that these effects did not emerge when infants saw an inanimate object with digits that moved to grasp the toy. Taken together, these findings indicate that young infants distinguish in their reasoning about human action and object motion, and that by 6 months infants encode the actions of other people in ways that are consistent with more mature understandings of goal-directed action.",
"title": ""
},
{
"docid": "125655821a44bbce2646157c8465e345",
"text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.",
"title": ""
},
{
"docid": "3ab4b094f3e32a4f467a849347157264",
"text": "Overview of geographically explicit momentary assessment research, applied to the study of mental health and well-being, which allows for cross-validation, extension, and enrichment of research on place and health. Building on the historical foundations of both ecological momentary assessment and geographic momentary assessment research, this review explores their emerging synergy into a more generalized and powerful research framework. Geographically explicit momentary assessment methods are rapidly advancing across a number of complimentary literatures that intersect but have not yet converged. Key contributions from these areas reveal tremendous potential for transdisciplinary and translational science. Mobile communication devices are revolutionizing research on mental health and well-being by physically linking momentary experience sampling to objective measures of socio-ecological context in time and place. Methodological standards are not well-established and will be required for transdisciplinary collaboration and scientific inference moving forward.",
"title": ""
},
{
"docid": "cae9e77074db114690a6ed1330d9b14c",
"text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.",
"title": ""
},
{
"docid": "d146a363006aa6cc5dde35f740a28aab",
"text": "Website privacy policies are often ignored by Internet users, because these documents tend to be long and difficult to understand. However, the significance of privacy policies greatly exceeds the attention paid to them: these documents are binding legal agreements between website operators and their users, and their opaqueness is a challenge not only to Internet users but also to policy regulators. One proposed alternative to the status quo is to automate or semi-automate the extraction of salient details from privacy policy text, using a combination of crowdsourcing, natural language processing, and machine learning. However, there has been a relative dearth of datasets appropriate for identifying data practices in privacy policies. To remedy this problem, we introduce a corpus of 115 privacy policies (267K words) with manual annotations for 23K fine-grained data practices. We describe the process of using skilled annotators and a purpose-built annotation tool to produce the data. We provide findings based on a census of the annotations and show results toward automating the annotation procedure. Finally, we describe challenges and opportunities for the research community to use this corpus to advance research in both privacy and language technologies.",
"title": ""
},
{
"docid": "b54ca99ae8818517d5c04100bad0f3b4",
"text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved 0 norm. In this paper, a special type of tensor complementarity problems with Z -tensors has been considered. Under some mild conditions, we show that to pursuit the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow to achieve a global optimal solution to the relaxednonconvexpolynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114). B Ziyan Luo starkeynature@hotmail.com Liqun Qi liqun.qi@polyu.edu.hk Naihua Xiu nhxiu@bjtu.edu.cn 1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People’s Repubic of China 2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People’s Repubic of China 3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People’s Repubic of China 123 Author's personal copy",
"title": ""
},
{
"docid": "d164ead192d1ba25472935f517608faa",
"text": "Real-world machine learning applications may require functions to be fast-to-evaluate and interpretable, in particular, guaranteed monotonicity of the learned function can be critical to user trust. We propose meeting these goals for low-dimensional machine learning problems by learning flexible, monotonic functions using calibrated interpolated look-up tables. We extend the structural risk minimization framework of lattice regression to train monotonic functions by solving a convex problem with appropriate linear inequality constraints. In addition, we propose jointly learning interpretable calibrations of each feature to normalize continuous features and handle categorical or missing data, at the cost of making the objective non-convex. We address large-scale learning through parallelization, mini-batching, and propose random sampling of additive regularizer terms. Case studies for six real-world problems with five to sixteen features and thousands to millions of training samples demonstrate the proposed monotonic functions can achieve state-of-the-art accuracy on practical problems while providing greater transparency to users.",
"title": ""
},
{
"docid": "45e1a424ad0807ce49cd4e755bdd9351",
"text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.",
"title": ""
}
] |
scidocsrr
|
f4051641f29c54cf41b7f648aecc44e6
|
Investigating the relationship between: Smartphone Addiction, Social Anxiety, Self-Esteem, Age and Gender
|
[
{
"docid": "e08854e0fc17a8f80ede1fc05a07805c",
"text": "While many researches have analyzed the psychological antecedents of mobile phone addiction and mobile phone usage behavior, their relationship with psychological characteristics remains mixed. We investigated the relationship between psychological characteristics, mobile phone addiction and use of mobile phones for 269 Taiwanese female university students who were administered Rosenberg’s selfesteem scale, Lai’s personality inventory, and a mobile phone usage questionnaire and mobile phone addiction scale. The result showing that: (1) social extraversion and anxiety have positive effects on mobile phone addiction, and self-esteem has negative effects on mobile phone addiction. (2) Mobile phone addiction has a positive predictive effect on mobile phone usage behavior. The results of this study identify personal psychological characteristics of Taiwanese female university students which can significantly predict mobile phone addiction; female university students with mobile phone addiction will make more phone calls and send more text messages. These results are discussed and suggestions for future research for school and university students are provided. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2acbfab9d69f3615930c1960a2e6dda9",
"text": "OBJECTIVE\nThe aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated.\n\n\nMETHODS\nA total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS.\n\n\nRESULTS\nBased on the factor analysis results, the subscale \"disturbance of reality testing\" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS.\n\n\nCONCLUSIONS\nThis study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.",
"title": ""
}
] |
[
{
"docid": "7a24f978a349c897c1ae91de66b2cdc6",
"text": "Synthetic biology is a research field that combines the investigative nature of biology with the constructive nature of engineering. Efforts in synthetic biology have largely focused on the creation and perfection of genetic devices and small modules that are constructed from these devices. But to view cells as true 'programmable' entities, it is now essential to develop effective strategies for assembling devices and modules into intricate, customizable larger scale systems. The ability to create such systems will result in innovative approaches to a wide range of applications, such as bioremediation, sustainable energy production and biomedical therapies.",
"title": ""
},
{
"docid": "b7a04d56d6d06a0d89f6113c3ab639a8",
"text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.",
"title": ""
},
{
"docid": "858acbd02250ff2f8325786475b4f3f3",
"text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. (See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. So objections to Grice based on a requirement of psychological reality will fail.",
"title": ""
},
{
"docid": "17833f9cf4eec06dbc4d7954b6cc6f3f",
"text": "Automated vehicles rely on the accurate and robust detection of the drivable area, often classified into free space, road area and lane information. Most current approaches use monocular or stereo cameras to detect these. However, LiDAR sensors are becoming more common and offer unique properties for road area detection such as precision and robustness to weather conditions. We therefore propose two approaches for a pixel-wise semantic binary segmentation of the road area based on a modified U-Net Fully Convolutional Network (FCN) architecture. The first approach UView-Cam employs a single camera image, whereas the second approach UGrid-Fused incorporates a early fusion of LiDAR and camera data into a multi-dimensional occupation grid representation as FCN input. The fusion of camera and LiDAR allows for efficient and robust leverage of individual sensor properties in a single FCN. For the training of UView-Cam, multiple publicly available datasets of street environments are used, while the UGrid-Fused is trained with the KITTI dataset. In the KITTI Road/Lane Detection benchmark, the proposed networks reach a MaxF score of 94.23% and 93.81% respectively. Both approaches achieve realtime performance with a detection rate of about 10 Hz.",
"title": ""
},
{
"docid": "5931169b6433d77496dfc638988399eb",
"text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.",
"title": ""
},
{
"docid": "f03054e65555fce682c9ce2ea3ee5258",
"text": "Synthetic biology, despite still being in its infancy, is increasingly providing valuable information for applications in the clinic, the biotechnology industry and in basic molecular research. Both its unique potential and the challenges it presents have brought together the expertise of an eclectic group of scientists, from cell biologists to engineers. In this Viewpoint article, five experts discuss their views on the future of synthetic biology, on its main achievements in basic and applied science, and on the bioethical issues that are associated with the design of new biological systems.",
"title": ""
},
{
"docid": "552baf04d696492b0951be2bb84f5900",
"text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "af4db4d9be3f652445a47e2985070287",
"text": "BACKGROUND\nSurgical Site Infections (SSIs) are infections of incision or deep tissue at operation sites. These infections prolong hospitalization, delay wound healing, and increase the overall cost and morbidity.\n\n\nOBJECTIVES\nThis study aimed to investigate anaerobic and aerobic bacteria prevalence in surgical site infections and determinate antibiotic susceptibility pattern in these isolates.\n\n\nMATERIALS AND METHODS\nOne hundred SSIs specimens were obtained by needle aspiration from purulent material in depth of infected site. These specimens were cultured and incubated in both aerobic and anaerobic condition. For detection of antibiotic susceptibility pattern in aerobic and anaerobic bacteria, we used disk diffusion, agar dilution, and E-test methods.\n\n\nRESULTS\nA total of 194 bacterial strains were isolated from 100 samples of surgical sites. Predominant aerobic and facultative anaerobic bacteria isolated from these specimens were the members of Enterobacteriaceae family (66, 34.03%) followed by Pseudomonas aeruginosa (26, 13.4%), Staphylococcus aureus (24, 12.37%), Acinetobacter spp. (18, 9.28%), Enterococcus spp. (16, 8.24%), coagulase negative Staphylococcus spp. (14, 7.22%) and nonhemolytic streptococci (2, 1.03%). Bacteroides fragilis (26, 13.4%), and Clostridium perfringens (2, 1.03%) were isolated as anaerobic bacteria. The most resistant bacteria among anaerobic isolates were B. fragilis. All Gram-positive isolates were susceptible to vancomycin and linezolid while most of Enterobacteriaceae showed sensitivity to imipenem.\n\n\nCONCLUSIONS\nMost SSIs specimens were polymicrobial and predominant anaerobic isolate was B. fragilis. Isolated aerobic and anaerobic strains showed high level of resistance to antibiotics.",
"title": ""
},
{
"docid": "72535e221c8d0a274ed7b025a17c8a7c",
"text": "Along with increasing demand on improving power quality, the most popular technique that has been used is Active Power Filter (APF); this is because APF can easily eliminate unwanted harmonics, improve power factor and overcome voltage sags. This paper will discuss and analyze the simulation result for a three-phase shunt active power filter using MATLAB/Simulink program. This simulation will implement a non-linear load and compensate line current harmonics under balance and unbalance load. As a result of the simulation, it is found that an active power filter is the better way to reduce the total harmonic distortion (THD) which is required by quality standards IEEE-519.",
"title": ""
},
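The shunt active power filter abstract above reports results as total harmonic distortion (THD), the quantity limited by IEEE-519. The snippet below is a small, self-contained illustration (not from the paper) of estimating THD from a sampled current waveform with an FFT, using the usual ratio of the RMS of the harmonics to the fundamental.

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=20):
    """Estimate total harmonic distortion of a sampled periodic signal.

    fs: sampling rate in Hz, f0: fundamental frequency in Hz.
    Uses the magnitude-spectrum bins nearest to k*f0 for k = 1..n_harmonics.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def mag(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag(f0)
    harmonics = np.array([mag(k * f0) for k in range(2, n_harmonics + 1)])
    return np.sqrt(np.sum(harmonics ** 2)) / fundamental

# toy usage: 50 Hz fundamental plus 3rd and 5th harmonics, as a non-linear load might draw
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i_load = (np.sin(2 * np.pi * 50 * t)
          + 0.2 * np.sin(2 * np.pi * 150 * t)
          + 0.1 * np.sin(2 * np.pi * 250 * t))
print(f"THD ~ {100 * thd(i_load, fs, 50):.1f} %")   # roughly 22 % for this toy waveform
```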
{
"docid": "c232d9b7283a580b96ff00d196d69aea",
"text": "We present an algorithm for performing Lambertian photometric stereo in the presence of shadows. The algorithm has three novel features. First, a fast graph cuts based method is used to estimate per pixel light source visibility. Second, it allows images to be acquired with multiple illuminants, and there can be fewer images than light sources. This leads to better surface coverage and improves the reconstruction accuracy by enhancing the signal to noise ratio and the condition number of the light source matrix. The ability to use fewer images than light sources means that the imaging effort grows sublinearly with the number of light sources. Finally, the recovered shadow maps are combined with shading information to perform constrained surface normal integration. This reduces the low frequency bias inherent to the normal integration process and ensures that the recovered surface is consistent with the shadowing configuration The algorithm works with as few as four light sources and four images. We report results for light source visibility detection and high quality surface reconstructions for synthetic and real datasets.",
"title": ""
},
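The photometric-stereo abstract above builds on the classical Lambertian model, in which each pixel intensity is the dot product of an albedo-scaled surface normal with the light direction. The sketch below shows only that classical least-squares step for known light directions and ignores shadows entirely; the paper's graph-cut visibility estimation and constrained normal integration are not reproduced.

```python
import numpy as np

def lambertian_normals(images, light_dirs):
    """Recover per-pixel albedo-scaled normals from k >= 3 images.

    images: array of shape (k, h, w), intensities under k distant lights.
    light_dirs: array of shape (k, 3), one unit light direction per image.
    Solves I = L @ g per pixel in the least-squares sense, where g = albedo * n.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                              # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# toy usage: flat surface facing the camera, four lights kept in front of it
rng = np.random.default_rng(1)
L = rng.normal(size=(4, 3))
L[:, 2] = np.abs(L[:, 2]) + 0.5
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.zeros((3, 8, 8)); n_true[2] = 1.0
imgs = np.einsum('kc,cij->kij', L, n_true)                 # noiseless Lambertian rendering
n_est, rho = lambertian_normals(imgs, L)                   # n_est[2] is ~1 everywhere
```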
{
"docid": "644936acfe1f9ffa0b5f3e8751015d86",
"text": "The use of electromagnetic induction lamps without electrodes has increased because of their long life and energy efficiency. The control of the ignition and luminosity of the lamp is provided by an electronic ballast. Beyond that, the electronic ballast also provides a power factor correction, allowing the minimizing of the lamps impact on the quality of service of the electrical network. The electronic ballast includes several blocks, namely a bridge rectifier, a power factor correcting circuit (PFC), an asymmetric half-bridge inverter with a resonant filter on the inverter output, and a circuit to control the conduction time ot the ballast transistors. Index Terms – SEPIC, PFC, electrodeless lamp, ressonant filter,",
"title": ""
},
{
"docid": "9332c32039cf782d19367a9515768e42",
"text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.",
"title": ""
},
{
"docid": "1a750462f0f5dea5e703c2f852e7aa38",
"text": "Background: Land resource management measures, such as soil bund, trench, check dams and plantation had been practiced in Melaka watershed, Ethiopia since 2010. The objective of this study is to assess the impact of above measures on soil loss rate, vegetative cover and livelihood of the population. Results: The land cover spatial data sets were created from Landsat satellite images of 2010 and 2015 using ERDAS IMAGINE 2014®. Soil loss rate was calculated using Revised Universal Soil Loss Equation (RUSLE) and its input data were generated from field investigation, satellite imageries and rainfall analysis. Data on land resource of the study area and its impact on livelihood were collected through face-to-face interview and key informants. The results revealed that cropland decreased by 9% whereas vegetative cover and grassland increased by 96 and 136%, respectively. The soil loss rate was 19.2 Mg ha−1 year−1 in 2010 and 12.4 Mg ha−1 year−1 in 2015, accounting to 34% decrease over 5 years. It may be attributed to construction of soil bund and the biological measures practiced by the stakeholders. Consequently, land productivity and availability of forage was improved which substantially contributed to the betterment of people’s livelihood. Conclusions: The land resource management measures practiced in the study area were highly effective for reducing soil loss, improving vegetation cover and livelihood of the population.",
"title": ""
},
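The watershed study above computes soil-loss rates with the Revised Universal Soil Loss Equation (RUSLE), which is simply a cell-by-cell product of factors, A = R · K · LS · C · P. The snippet below illustrates that calculation on a toy raster; all factor values are hypothetical and are not taken from the study.

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: annual soil loss A (t ha^-1 yr^-1) as the product of rainfall
    erosivity R, soil erodibility K, slope length/steepness LS, cover
    management C and support practice P, evaluated cell by cell."""
    return R * K * LS * C * P

# toy 3x3 rasters with illustrative (hypothetical) factor values
R  = np.full((3, 3), 450.0)                 # MJ mm ha^-1 h^-1 yr^-1
K  = np.array([[0.02, 0.03, 0.03],
               [0.04, 0.03, 0.02],
               [0.03, 0.02, 0.04]])         # t h MJ^-1 mm^-1
LS = np.array([[1.2, 2.5, 3.1],
               [0.8, 1.9, 2.2],
               [0.5, 1.1, 1.6]])            # dimensionless
C  = np.full((3, 3), 0.25)                  # cropland-like cover factor
P  = np.where(LS > 2.0, 0.6, 1.0)           # bunds/terracing assumed on steeper cells
A  = rusle_soil_loss(R, K, LS, C, P)
print("mean soil loss:", A.mean().round(1), "t ha^-1 yr^-1")
```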
{
"docid": "3f394e57febd3ffdc7414cf1af94c53b",
"text": "Background recovery is a very important theme in computer vision applications. Recent research shows that robust principal component analysis (RPCA) is a promising approach for solving problems such as noise removal, video background modeling, and removal of shadows and specularity. RPCA utilizes the fact that the background is common in multiple views of a scene, and attempts to decompose the data matrix constructed from input images into a low-rank matrix and a sparse matrix. This is possible if the sparse matrix is sufficiently sparse, which may not be true in computer vision applications. Moreover, algorithmic parameters need to be fine tuned to yield accurate results. This paper proposes a fixed-rank RPCA algorithm for solving background recovering problems whose low-rank matrices have known ranks. Comprehensive tests show that, by fixing the rank of the low-rank matrix to a known value, the fixed-rank algorithm produces more reliable and accurate results than existing low-rank RPCA algorithm.",
"title": ""
},
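The abstract above proposes fixing the rank of the low-rank component in RPCA for background recovery. The code below is a rough sketch of that general idea, not the authors' algorithm: it alternates a truncated SVD for the rank-r background with soft-thresholding for the sparse foreground, using a common heuristic threshold weight.

```python
import numpy as np

def fixed_rank_rpca(D, rank, lam=None, n_iter=100):
    """Decompose D ~ L + S with rank(L) fixed to `rank` and S sparse.

    Simple alternating scheme: the L-step is the best rank-r approximation of
    D - S (truncated SVD); the S-step is soft-thresholding of D - L.
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # heuristic sparsity weight
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # rank-r projection
        residual = D - L
        S = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)
    return L, S

# toy usage: columns are vectorised "frames" sharing a static rank-1 background
rng = np.random.default_rng(0)
background = np.outer(rng.normal(size=100), np.ones(30))
foreground = (rng.random((100, 30)) < 0.05) * 5.0            # sparse moving objects
L, S = fixed_rank_rpca(background + foreground, rank=1)
```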
{
"docid": "a74aef75f5b1d5bc44da2f6d2c9284cf",
"text": "In this paper, we define irregular bipolar fuzzy graphs and its various classifications. Size of regular bipolar fuzzy graphs is derived. The relation between highly and neighbourly irregular bipolar fuzzy graphs are established. Some basic theorems related to the stated graphs have also been presented.",
"title": ""
},
{
"docid": "9e4b7e87229dfb02c2600350899049be",
"text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.",
"title": ""
},
{
"docid": "f267e8cfbe10decbe16fa83c97e76049",
"text": "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen.",
"title": ""
},
{
"docid": "8ce33eef3eaa1f89045d916869813d5d",
"text": "This paper introduces a deep neural network model for subband-based speech 1 synthesizer. The model benefits from the short bandwidth of the subband signals 2 to reduce the complexity of the time-domain speech generator. We employed 3 the multi-level wavelet analysis/synthesis to decompose/reconstruct the signal to 4 subbands in time domain. Inspired from the WaveNet, a convolutional neural 5 network (CNN) model predicts subband speech signals fully in time domain. Due 6 to the short bandwidth of the subbands, a simple network architecture is enough to 7 train the simple patterns of the subbands accurately. In the ground truth experiments 8 with teacher forcing, the subband synthesizer outperforms the fullband model 9 significantly. In addition, by conditioning the model on the phoneme sequence 10 using a pronunciation dictionary, we have achieved the first fully time-domain 11 neural text-to-speech (TTS) system. The generated speech of the subband TTS 12 shows comparable quality as the fullband one with a slighter network architecture 13 for each subband. 14",
"title": ""
},
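The subband synthesizer abstract above relies on multi-level wavelet analysis/synthesis to split a waveform into subbands and reconstruct it afterwards. Below is a minimal illustration of that decomposition step with the PyWavelets package; the neural predictor itself is not reproduced, and the Daubechies-8 wavelet and three decomposition levels are our choices, not necessarily the paper's.

```python
import numpy as np
import pywt

# toy "speech" signal: a short chirp sampled at 16 kHz
fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * (100 + 400 * t) * t)

# multi-level discrete wavelet analysis: one approximation + 3 detail subbands
coeffs = pywt.wavedec(x, wavelet='db8', level=3)
for i, c in enumerate(coeffs):
    band = 'approximation' if i == 0 else f'detail level {4 - i}'
    print(f'{band:>16}: {len(c)} samples')       # each subband is shorter than x

# near-perfect reconstruction from the subband coefficients
x_hat = pywt.waverec(coeffs, wavelet='db8')[:len(x)]
print('max reconstruction error:', float(np.max(np.abs(x - x_hat))))
```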
{
"docid": "8bd93bf2043a356ff40531acb372992d",
"text": "Liver lesion segmentation is an important step for liver cancer diagnosis, treatment planning and treatment evaluation. LiTS (Liver Tumor Segmentation Challenge) provides a common testbed for comparing different automatic liver lesion segmentation methods. We participate in this challenge by developing a deep convolutional neural network (DCNN) method. The particular DCNN model works in 2.5D in that it takes a stack of adjacent slices as input and produces the segmentation map corresponding to the center slice. The model has 32 layers in total and makes use of both long range concatenation connections of U-Net [1] and short-range residual connections from ResNet [2]. The model was trained using the 130 LiTS training datasets and achieved an average Dice score of 0.67 when evaluated on the 70 test CT scans, which ranked first for the LiTS challenge at the time of the ISBI 2017 conference.",
"title": ""
},
{
"docid": "1afd50a91b67bd1eab0db1c2a19a6c73",
"text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.",
"title": ""
}
] |
scidocsrr
|
15f5b0b5aab4f3bb6141fdac4c6471c4
|
The Compact 3D Convolutional Neural Network for Medical Images
|
[
{
"docid": "2d95b9919e1825ea46b5c5e6a545180c",
"text": "Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. Comparisons with a competing multi-channel convolutional neural network for multislice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains. 1 ar X iv :1 60 9. 09 14 3v 1 [ st at .M L ] 2 8 Se p 20 16",
"title": ""
}
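The record above, and the query it answers, concern compact 3-D convolutional networks for volumetric medical images such as CT. The PyTorch sketch below is purely illustrative; the layer counts and channel widths are arbitrary choices of ours and do not correspond to ReCTnet or to any specific published architecture.

```python
import torch
import torch.nn as nn

class Compact3DCNN(nn.Module):
    """Small 3-D CNN for patch-level classification of volumetric scans."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
        self.features = nn.Sequential(block(in_channels, 16),
                                      block(16, 32),
                                      block(32, 64))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1),
                                  nn.Flatten(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):              # x: (batch, channels, depth, height, width)
        return self.head(self.features(x))

# toy forward pass on a batch of 32^3 patches
model = Compact3DCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))
print(logits.shape)                    # torch.Size([4, 2])
```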
] |
[
{
"docid": "345a59aac1e89df5402197cca90ca464",
"text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia",
"title": ""
},
{
"docid": "0640f60855954fa2f12a58f403aec058",
"text": "Corresponding Author: Vo Ngoc Phu Nguyen Tat Thanh University, 300A Nguyen Tat Thanh Street, Ward 13, District 4, Ho Chi Minh City, 702000, Vietnam Email: vongocphu03hca@gmail.com vongocphu@ntt.edu.vn Abstract: A Data Mining Has Already Had Many Algorithms Which A KNearest Neighbors Algorithm, K-NN, Is A Famous Algorithm For Researchers. K-NN Is Very Effective On Small Data Sets, However It Takes A Lot Of Time To Run On Big Datasets. Today, Data Sets Often Have Millions Of Data Records, Hence, It Is Difficult To Implement K-NN On Big Data. In This Research, We Propose An Improvement To K-NN To Process Big Datasets In A Shortened Execution Time. The Reformed KNearest Neighbors Algorithm (R-K-NN) Can Be Implemented On Large Datasets With Millions Or Even Billions Of Data Records. R-K-NN Is Tested On A Data Set With 500,000 Records. The Execution Time Of R-KNN Is Much Shorter Than That Of K-NN. In Addition, R-K-NN Is Implemented In A Parallel Network System With Hadoop Map (M) And Hadoop Reduce (R).",
"title": ""
},
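The R-K-NN abstract above is about making k-nearest-neighbour classification tractable on large datasets with Hadoop MapReduce. The authors' formulation is not reproduced here; the sketch below only illustrates the same general map/reduce idea in plain Python, splitting the training data into chunks, collecting per-chunk neighbour candidates in parallel, and merging them for a final vote.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

def local_candidates(args):
    """Map step: k nearest candidates for every query within one training chunk."""
    X_chunk, y_chunk, queries, k = args
    d2 = ((queries[:, None, :] - X_chunk[None, :, :]) ** 2).sum(-1)   # (q, chunk)
    idx = np.argsort(d2, axis=1)[:, :k]
    rows = np.arange(len(queries))[:, None]
    return d2[rows, idx], y_chunk[idx]             # distances and labels, each (q, k)

def knn_predict(X, y, queries, k=5, n_chunks=4, workers=2):
    """Reduce step: merge per-chunk candidates and take a majority vote."""
    chunks = [(Xc, yc, queries, k) for Xc, yc in
              zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(local_candidates, chunks))
    dists = np.hstack([d for d, _ in results])     # (q, k * n_chunks)
    labels = np.hstack([l for _, l in results])
    order = np.argsort(dists, axis=1)[:, :k]       # global k nearest among candidates
    rows = np.arange(len(queries))[:, None]
    return np.array([Counter(row).most_common(1)[0][0] for row in labels[rows, order]])

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 8)); y = (X[:, 0] > 0).astype(int)
    q = rng.normal(size=(10, 8))
    print(knn_predict(X, y, q))
```

The merge is exact rather than approximate, because the global k nearest neighbours are always contained in the union of the per-chunk k nearest candidates.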
{
"docid": "d2e434f472b60e17ab92290c78706945",
"text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "3615db7b4a62f981ef62062084597ca5",
"text": "Adoption is a topic of crucial importance both to those directly involved and to society. Yet, at this writing, the federal government collects no comprehensive national statistics on adoption. The purpose of this article is to address what we do know, what we do not know, and what we need to know about the statistics on adoption. The article provides an overview of adoption and describes data available regarding adoption arrangements and the characteristics of parents who relinquish children, of children who are adopted or in substitute care, and of adults who seek to adopt. Recommendations for future data collection are offered, including the establishment of a national data collection system for adoption statistics. doption is an issue of vital importance for all persons involved in Kathy S. Stolley, M.A., is an instructor in the A the adoption triangle: the child, the adoptive parents, and the Department of Sociology birthparents. According to national estimates, one million children in the United States live with adoptive parents, and from 2% to and Criminal Justice at Old Dominion Univer4% of American families include an adopted child. sity, Norfolk, VA. Adoption is most important for infertile couples seeking children and children in need of parents. Yet adoption issues also have consequences for the larger society in such areas as public welfare and mental health. Additionally, adoption can be framed as a public health issue, particularly in light of increasing numbers of pediatric AIDS cases and concerns regarding drug-exposed infants, and “boarder” babies available for adoption. Adoption is also often supported as an alternative to abortion. Limitations of Available Data Despite the importance of adoption to many groups, it remains an underresearched area and a topic on which the data are incomplete. Indeed, at this writing, no comprehensive national data on adoption are collected by the federal government. Through the Children’s Bureau and later the National Center for Social Statistics (NCSS), the federal government collected adoption data periodically between 1944 and 1957, then annually from 1957 to 1975. States voluntarily reported summary statistics on all types of finalized adoptions using data primarily drawn from court records. The number of states and territories participating in this reporting system varied from year to year, ranging from a low of 22 in 1944 to a high of 52 during the early 1960s.4 This data collection effort ended in 1975 with the dissolution of the NCSS. The Future of Children ADOPTION Vol. 3 • No. 1 Spring 1993",
"title": ""
},
{
"docid": "d34cc5c09e882c167b3ff273f5c52159",
"text": "Received: 23 May 2011 Revised: 20 February 2012 2nd Revision: 7 September 2012 3rd Revision: 6 November 2012 Accepted: 7 November 2012 Abstract Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013",
"title": ""
},
{
"docid": "e035233d3787ea79c446d1716553d41e",
"text": "In this paper, we propose a method of detecting and classifying web application attacks. In contrast to current signature-based security methods, our solution is an ontology based technique. It specifies web application attacks by using semantic rules, the context of consequence and the specifications of application protocols. The system is capable of detecting sophisticated attacks effectively and efficiently by analyzing the specified portion of a user request where attacks are possible. Semantic rules help to capture the context of the application, possible attacks and the protocol that was used. These rules also allow inference to run over the ontological models in order to detect, the often complex polymorphic variations of web application attacks. The ontological model was developed using Description Logic that was based on the Web Ontology Language (OWL). The inference rules are Horn Logic statements and are implemented using the Apache JENA framework. The system is therefore platform and technology independent. Prior to the evaluation of the system the knowledge model was validated by using OntoClean to remove inconsistency, incompleteness and redundancy in the specification of ontological concepts. The experimental results show that the detection capability and performance of our system is significantly better than existing state of the art solutions. The system successfully detects web application attacks whilst generating few false positives. The examples that are presented demonstrate that a semantic approach can be used to effectively detect zero day and more sophisticated attacks in a real-world environment. 2013 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a394dafb3ffd6a66bdf4fe3fb0b03f40",
"text": "Part-of-speech tagging, like any supervised statistical NLP task, is more difficult when test sets are very different from training sets, for example when tagging across genres or language varieties. We examined the problem of POS tagging of different varieties of Mandarin Chinese (PRC-Mainland, PRCHong Kong, and Taiwan). An analytic study first showed that unknown words were a major source of difficulty in cross-variety tagging. Unknown words in English tend to be proper nouns. By contrast, we found that Mandarin unknown words were mostly common nouns and verbs. We showed these results are caused by the high frequency of morphological compounding in Mandarin; in this sense Mandarin is more like German than English. Based on this analysis, we propose a variety of new morphological unknown-word features for POS tagging, extending earlier work by others on unknown-word tagging in English and German. Our features were implemented in a maximum entropy Markov model. Our system achieves state-of-the-art performance in Mandarin tagging, including improving unknown-word tagging performance on unseen varieties in Chinese Treebank 5.0 from 61% to 80% correct.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "63a583de2dbbbd9aada8a685ec9edc78",
"text": "BACKGROUND\nVarious nerve blocks with local anaesthetic agents have been used to reduce pain after hip fracture and subsequent surgery. This review was published originally in 1999 and was updated in 2001, 2002, 2009 and 2017.\n\n\nOBJECTIVES\nThis review focuses on the use of peripheral nerves blocks as preoperative analgesia, as postoperative analgesia or as a supplement to general anaesthesia for hip fracture surgery. We undertook the update to look for new studies and to update the methods to reflect Cochrane standards.\n\n\nSEARCH METHODS\nFor the updated review, we searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8), MEDLINE (Ovid SP, 1966 to August week 1 2016), Embase (Ovid SP, 1988 to 2016 August week 1) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCO, 1982 to August week 1 2016), as well as trial registers and reference lists of relevant articles.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) involving use of nerve blocks as part of the care provided for adults aged 16 years and older with hip fracture.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed new trials for inclusion, determined trial quality using the Cochrane tool and extracted data. When appropriate, we pooled results of outcome measures. We rated the quality of evidence according to the GRADE Working Group approach.\n\n\nMAIN RESULTS\nWe included 31 trials (1760 participants; 897 randomized to peripheral nerve blocks and 863 to no regional blockade). Results of eight trials with 373 participants show that peripheral nerve blocks reduced pain on movement within 30 minutes of block placement (standardized mean difference (SMD) -1.41, 95% confidence interval (CI) -2.14 to -0.67; equivalent to -3.4 on a scale from 0 to 10; I2 = 90%; high quality of evidence). Effect size was proportionate to the concentration of local anaesthetic used (P < 0.00001). Based on seven trials with 676 participants, we did not find a difference in the risk of acute confusional state (risk ratio (RR) 0.69, 95% CI 0.38 to 1.27; I2 = 48%; very low quality of evidence). Three trials with 131 participants reported decreased risk for pneumonia (RR 0.41, 95% CI 0.19 to 0.89; I2 = 3%; number needed to treat for an additional beneficial outcome (NNTB) 7, 95% CI 5 to 72; moderate quality of evidence). We did not find a difference in risk of myocardial ischaemia or death within six months, but the number of participants included was well below the optimal information size for these two outcomes. Two trials with 155 participants reported that peripheral nerve blocks also reduced time to first mobilization after surgery (mean difference -11.25 hours, 95% CI -14.34 to -8.15 hours; I2 = 52%; moderate quality of evidence). One trial with 75 participants indicated that the cost of analgesic drugs was lower when they were given as a single shot block (SMD -3.48, 95% CI -4.23 to -2.74; moderate quality of evidence).\n\n\nAUTHORS' CONCLUSIONS\nHigh-quality evidence shows that regional blockade reduces pain on movement within 30 minutes after block placement. Moderate-quality evidence shows reduced risk for pneumonia, decreased time to first mobilization and cost reduction of the analgesic regimen (single shot blocks).",
"title": ""
},
{
"docid": "9af37841feed808345c39ee96ddff914",
"text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.",
"title": ""
},
{
"docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a",
"text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.",
"title": ""
},
{
"docid": "93ee57bae5f3e7a9aabafe033302c7f8",
"text": "Dialog state tracking - the process of updating the dialog state after each interaction with the user - is a key component of most dialog systems. Following a similar scheme to the fourth dialog state tracking challenge, this edition again focused on human-human dialogs, but introduced the task of cross-lingual adaptation of trackers. The challenge received a total of 32 entries from 9 research groups. In addition, several pilot track evaluations were also proposed receiving a total of 16 entries from 4 groups. In both cases, the results show that most of the groups were able to outperform the provided baselines for each task.",
"title": ""
},
{
"docid": "2201ca2f10699276d68e380fd1069086",
"text": "After integrating five higher-order personality traits in an extended model of technology acceptance, Devaraj et al. (2008) called for further research including personality in information systems research to understand the formation of perceptual beliefs and behaviors in more detail. To assist such future research endeavors, this article gives an overview on prior research discussing personality within the six plus two journals of the AIS Senior Basket (MISQ, ISR, JMIS, JAIS, EJIS, ISJ, JSIS, JIT) 1 . Therefore, the Theory of a Person approach (ToP) derived from psychology research serves as the underlying conceptual matrix. Within the literature analysis, we identify 30 articles discussing personality traits on distinct hierarchical levels in three fields of information systems research. Results of the literature analysis reveal a shift of examined traits over the last years. In addition, research gaps are identified so that propositions are derived. Further research results and implications are discussed within the article.",
"title": ""
},
{
"docid": "3f9bb5e1b9b6d4d44cb9741a32f7325f",
"text": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "7a619f349e8b62b016db98e7526c04a6",
"text": "Although sensor noise is generally known as a very reliable means to uniquely identify digital cameras, care has to be taken with respect to camera model characteristics that may cause false accusations. While earlier reports focused on so-called linear patterns with a regular grid structure, also distortions due to geometric corrections of radial lens distortion have recently gained interest. Here, we report observations from a case study with the 'Dresden Image Database' that revealed further artefacts. We found diagonal line artefacts in Nikon CoolPix S710 sensor noise, as well as non-trivial dependencies between sensor noise, exposure time (FujiFilm J50) and focal length (Casio EX-Z150). At slower shutter speeds, original J50 images exhibit a slight horizontal shift, whereas EX-Z150 images exhibit irregular geometric distortions, which depend on the focal length and which become visible in the p-map of state-of-the-art resampling detectors. The observed artefacts may provide valuable clues for camera model identification, but also call for particular attention when creating reference noise patterns for applications that require low false negative rates.",
"title": ""
},
{
"docid": "893408bc41eb46a75fc59e23f74339cf",
"text": "We discuss cutting stock problems (CSPs) from the perspective of the paper industry and the financial impact they make. Exact solution approaches and heuristics have been used for decades to support cutting stock decisions in that industry. We have developed polylithic solution techniques integrated in our ERP system to solve a variety of cutting stock problems occurring in real world problems. Among them is the simultaneous minimization of the number of rolls and the number of patterns while not allowing any overproduction. For two cases, CSPs minimizing underproduction and CSPs with master rolls of different widths and availability, we have developed new column generation approaches. The methods are numerically tested using real world data instances. An assembly of current solved and unsolved standard and non-standard CSPs at the forefront of research are put in perspective.",
"title": ""
},
{
"docid": "e91c18f5509e05471d20d4e28e03b014",
"text": "This paper describes the design of a broadside circularly polarized uniform circular array based on curved planar inverted F-antenna elements. Circular polarization (CP) is obtained by exploiting the sequential rotation technique and implementing it with a series feed network. The proposed structure is first introduced, and some geometrical considerations are derived. Second, the array radiation body is designed taking into account the mutual coupling among antenna elements. Third, the series feed network usually employed for four-antenna element arrays is analyzed and extended to three and more than four antennas exploiting the special case of equal power distribution. The array is designed with three-, four-, five-, and six-antenna elements, and dimensions, impedance bandwidth (defined for <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}\\leq -10$ </tex-math></inline-formula> dB), axial ratio (AR) bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\text {AR}\\leq 3$ </tex-math></inline-formula> dB), gain, beamwidth, front-to-back ratio, and cross-polarization level are compared. Arrays with three and five elements are also prototyped to benchmark the numerical analysis results, finding good correspondence.",
"title": ""
},
{
"docid": "a6e71e4be58c51b580fcf08e9d1a100a",
"text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.",
"title": ""
}
] |
scidocsrr
|
c92e25f5d839b9fe1b8e7685305320fc
|
A novel paradigm for calculating Ramsey number via Artificial Bee Colony Algorithm
|
[
{
"docid": "828c54f29339e86107f1930ae2a5e77f",
"text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "35ab98f6e5b594261e52a21740c70336",
"text": "Artificial Bee Colony (ABC) algorithm which is one of the most recently introduced optimization algorithms, simulates the intelligent foraging behavior of a honey bee swarm. Clustering analysis, used in many disciplines and applications, is an important tool and a descriptive task seeking to identify homogeneous groups of objects based on the values of their attributes. In this work, ABC is used for data clustering on benchmark problems and the performance of ABC algorithm is compared with Particle Swarm Optimization (PSO) algorithm and other nine classification techniques from the literature. Thirteen of typical test data sets from the UCI Machine Learning Repository are used to demonstrate the results of the techniques. The simulation results indicate that ABC algorithm can efficiently be used for multivariate data clustering. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "4847eb4451c597d4656cf48c242cf252",
"text": "Despite the independent evolution of multicellularity in plants and animals, the basic organization of their stem cell niches is remarkably similar. Here, we report the genome-wide regulatory potential of WUSCHEL, the key transcription factor for stem cell maintenance in the shoot apical meristem of the reference plant Arabidopsis thaliana. WUSCHEL acts by directly binding to at least two distinct DNA motifs in more than 100 target promoters and preferentially affects the expression of genes with roles in hormone signaling, metabolism, and development. Striking examples are the direct transcriptional repression of CLAVATA1, which is part of a negative feedback regulation of WUSCHEL, and the immediate regulation of transcriptional repressors of the TOPLESS family, which are involved in auxin signaling. Our results shed light on the complex transcriptional programs required for the maintenance of a dynamic and essential stem cell niche.",
"title": ""
},
{
"docid": "9c0d65ee42ccfaa291b576568bad59e0",
"text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.",
"title": ""
},
{
"docid": "ac52504a90be9cd685a10f73603d3776",
"text": "Unsupervised domain adaption aims to learn a powerful classifier for the target domain given a labeled source data set and an unlabeled target data set. To alleviate the effect of ‘domain shift’, the major challenge in domain adaptation, studies have attempted to align the distributions of the two domains. Recent research has suggested that generative adversarial network (GAN) has the capability of implicitly capturing data distribution. In this paper, we thus propose a simple but effective model for unsupervised domain adaption leveraging adversarial learning. The same encoder is shared between the source and target domains which is expected to extract domain-invariant representations with the help of an adversarial discriminator. With the labeled source data, we introduce the center loss to increase the discriminative power of feature learned. We further align the conditional distribution of the two domains to enforce the discrimination of the features in the target domain. Unlike previous studies where the source features are extracted with a fixed pre-trained encoder, our method jointly learns feature representations of two domains. Moreover, by sharing the encoder, the model does not need to know the source of images during testing and hence is more widely applicable. We evaluate the proposed method on several unsupervised domain adaption benchmarks and achieve superior or comparable performance to state-of-the-art results.",
"title": ""
},
{
"docid": "1aac7dedc18b437966b31cf04f1b7efc",
"text": "Massive open online courses (MOOCs) continue to appear across the higher education landscape, originating from many institutions in the USA and around the world. MOOCs typically have low completion rates, at least when compared with traditional courses, as this course delivery model is very different from traditional, fee-based models, such as college courses. This research examined MOOC student demographic data, intended behaviours and course interactions to better understand variables that are indicative of MOOC completion. The results lead to ideas regarding how these variables can be used to support MOOC students through the application of learning analytics tools and systems.",
"title": ""
},
{
"docid": "575d8fed62c2afa1429d16444b6b173c",
"text": "Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs.",
"title": ""
},
{
"docid": "12d4c8ff1072fece3fea7eeac43c3fc5",
"text": "Multi-agent path finding (MAPF) is well-studied in artificial intelligence, robotics, theoretical computer science and operations research. We discuss issues that arise when generalizing MAPF methods to real-world scenarios and four research directions that address them. We emphasize the importance of addressing these issues as opposed to developing faster methods for the standard formulation of the MAPF problem.",
"title": ""
},
{
"docid": "e94f453a3301ca86bed19162ad1cb6e1",
"text": "Linux scheduling is based on the time-sharing technique already introduced in the section \"CPU's Time Sharing\" in Chapter 5, Timing Measurements: several processes are allowed to run \"concurrently,\" which means that the CPU time is roughly divided into \"slices,\" one for each runnable process.[1] Of course, a single processor can run only one process at any given instant. If a currently running process is not terminated when its time slice or quantum expires, a process switch may take place. Time-sharing relies on timer interrupts and is thus transparent to processes. No additional code needs to be inserted in the programs in order to ensure CPU time-sharing.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "2c266af949495f7cd32b8abdf1a04803",
"text": "Humans rely on eye gaze and hand manipulations extensively in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose applying a multimodal, gaze plus free-space gesture approach to enable rapid, precise and expressive touch-free interactions. We show the input methods are highly complementary, mitigating issues of imprecision and limited expressivity in gaze-alone systems, and issues of targeting speed in gesture-alone systems. We extend an existing interaction taxonomy that naturally divides the gaze+gesture interaction space, which we then populate with a series of example interaction techniques to illustrate the character and utility of each method. We contextualize these interaction techniques in three example scenarios. In our user study, we pit our approach against five contemporary approaches; results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of \"gold standard\" input systems, such as the mouse and trackpad.",
"title": ""
},
{
"docid": "ceb42399b7cd30b15d27c30d7c4b57b6",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an informationtheoretic perspective. The relationships among the capacity r egion of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual use r rates are used as the criteria. In a wireless downlink scenar io with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user’s individual rate, particularly when the difference between the users’ channels is large. I. I NTRODUCTION Because of its superior spectral efficiency, non-orthogona l multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1] – [4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as timedivision multiple access (TDMA), NOMA faces strong cochannel interference between different users, and success ive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference managemen t. The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC ). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity re gion of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instea d of superposition coding. This paper mainly focuses on the single-antenna scenario. Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each r eceiver is corrupted by additive Gaussian noise with unit var iance. Denote the ordered channel gains from the transmitter to the two receivers byhw andhb, i.e., |hw| < |hb|. For a given channel pair(hw, hb), the capacity region is given by [6] C , ⋃ a1+a2=1, a1, a2 ≥ 0 { (R1, R2) : R1, R2 ≥ 0, R1≤ log2 ( 1+ a1x 1+a2x ) , R2≤ log2 (1+a2y) }",
"title": ""
},
{
"docid": "8b5b4950177030e7664d57724acd52a3",
"text": "With the fast development of industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices’ energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating the fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulation are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.",
"title": ""
},
{
"docid": "dc9a92313c58b5e688a3502b994e6d3a",
"text": "This paper explores the application of Activity-Based Costing and Activity-Based Management in ecommerce. The proposed application may lead to better firm performance of many companies in offering their products and services over the Internet. A case study of a fictitious Business-to-Customer (B2C) company is used to illustrate the proposed structured implementation procedure and effects of an Activity-Based Costing analysis. The analysis is performed by using matrixes in order to trace overhead. The Activity-Based Costing analysis is then used to demonstrate operational and strategic Activity-Based Management in e-commerce.",
"title": ""
},
{
"docid": "e3566963e4307c15086a54afe7661f32",
"text": "Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real-time, within a highly dynamic environment. This need for stringent communication quality-of-service (QoS) requirements as well as mobile edge and core intelligence can only be realized by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview on a number of key types of neural networks that include feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching.For each individual application, we present the main motivation for using ANNs along with the associated challenges while also providing a detailed example for a use case scenario and outlining future works that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks. This research was supported by the U.S. National Science Foundation under Grants CNS-1460316 and IIS-1633363. ar X iv :1 71 0. 02 91 3v 1 [ cs .I T ] 9 O ct 2 01 7",
"title": ""
},
{
"docid": "ea8685f27096f3e3e589ea8af90e78f5",
"text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.",
"title": ""
},
{
"docid": "a0f8af71421d484cbebb550a0bf59a6d",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
},
{
"docid": "c21a1a07918d86dab06d84e0e4e7dc05",
"text": "Big data potential value across business sectors has received tremendous attention from the practitioner and academia world. The huge amount of data collected in different forms in organizations promises to radically transform the business landscape globally. The impact of big data, which is spreading across all business sectors, has potential to create new opportunities for growth. With organizations now able to store huge diverse amounts of data from different sources and forms, big data is expected to deliver tremendous value across business sectors. This paper focuses on building a business case for big data adoption in organizations. This paper discusses some of the opportunities and potential benefits associated with big data adoption across various business sectors globally. The discussion is important for making a business case for big data investment in organizations, which is major challenge for its adoption globally. The paper uses the IT strategic grid to understand the current and future potential benefits of big data for different business sectors. The results of the study suggest that there is no one-size-fits-all to big data adoption potential benefits in organizations.",
"title": ""
},
{
"docid": "636851f2fc41fbeb488d27c813d175dc",
"text": "We propose DropMax, a stochastic version of softmax classifier which at each iteration drops non-target classes according to dropout probabilities adaptively decided for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are input-adaptively learned via variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponentially many classifiers with different decision boundaries. Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains significantly improved accuracy over the regular softmax classifier and other baselines. Further analysis of the learned dropout probabilities shows that our model indeed selects confusing classes more often when it performs classification.",
"title": ""
},
{
"docid": "6cfdad2bb361713616dd2971026758a7",
"text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.",
"title": ""
},
{
"docid": "a58130841813814dacd7330d04efe735",
"text": "Under-reporting of food intake is one of the fundamental obstacles preventing the collection of accurate habitual dietary intake data. The prevalence of under-reporting in large nutritional surveys ranges from 18 to 54% of the whole sample, but can be as high as 70% in particular subgroups. This wide variation between studies is partly due to different criteria used to identify under-reporters and also to non-uniformity of under-reporting across populations. The most consistent differences found are between men and women and between groups differing in body mass index. Women are more likely to under-report than men, and under-reporting is more common among overweight and obese individuals. Other associated characteristics, for which there is less consistent evidence, include age, smoking habits, level of education, social class, physical activity and dietary restraint. Determining whether under-reporting is specific to macronutrients or food is problematic, as most methods identify only low energy intakes. Studies that have attempted to measure under-reporting specific to macronutrients express nutrients as percentage of energy and have tended to find carbohydrate under-reported and protein over-reported. However, care must be taken when interpreting these results, especially when data are expressed as percentages. A logical conclusion is that food items with a negative health image (e.g. cakes, sweets, confectionery) are more likely to be under-reported, whereas those with a positive health image are more likely to be over-reported (e.g. fruits and vegetables). This also suggests that dietary fat is likely to be under-reported. However, it is necessary to distinguish between under-reporting and genuine under-eating for the duration of data collection. The key to understanding this problem, but one that has been widely neglected, concerns the processes that cause people to under-report their food intakes. The little work that has been done has simply confirmed the complexity of this issue. The importance of obtaining accurate estimates of habitual dietary intakes so as to assess health correlates of food consumption can be contrasted with the poor quality of data collected. This phenomenon should be considered a priority research area. Moreover, misreporting is not simply a nutritionist's problem, but requires a multidisciplinary approach (including psychology, sociology and physiology) to advance the understanding of under-reporting in dietary intake studies.",
"title": ""
},
{
"docid": "80e0a6c270bb146a1a45994d27340639",
"text": "BACKGROUND\nThe promotion of active and healthy ageing is becoming increasingly important as the population ages. Physical activity (PA) significantly reduces all-cause mortality and contributes to the prevention of many chronic illnesses. However, the proportion of people globally who are active enough to gain these health benefits is low and decreases with age. Social support (SS) is a social determinant of health that may improve PA in older adults, but the association has not been systematically reviewed. This review had three aims: 1) Systematically review and summarise studies examining the association between SS, or loneliness, and PA in older adults; 2) clarify if specific types of SS are positively associated with PA; and 3) investigate whether the association between SS and PA differs between PA domains.\n\n\nMETHODS\nQuantitative studies examining a relationship between SS, or loneliness, and PA levels in healthy, older adults over 60 were identified using MEDLINE, PSYCInfo, SportDiscus, CINAHL and PubMed, and through reference lists of included studies. Quality of these studies was rated.\n\n\nRESULTS\nThis review included 27 papers, of which 22 were cross sectional studies, three were prospective/longitudinal and two were intervention studies. Overall, the study quality was moderate. Four articles examined the relation of PA with general SS, 17 with SS specific to PA (SSPA), and six with loneliness. The results suggest that there is a positive association between SSPA and PA levels in older adults, especially when it comes from family members. No clear associations were identified between general SS, SSPA from friends, or loneliness and PA levels. When measured separately, leisure time PA (LTPA) was associated with SS in a greater percentage of studies than when a number of PA domains were measured together.\n\n\nCONCLUSIONS\nThe evidence surrounding the relationship between SS, or loneliness, and PA in older adults suggests that people with greater SS for PA are more likely to do LTPA, especially when the SS comes from family members. However, high variability in measurement methods used to assess both SS and PA in included studies made it difficult to compare studies.",
"title": ""
}
] |
scidocsrr
|
647b9b99a6f33511254b9be5c427a473
|
Market Index and Stock Price Direction Prediction using Machine Learning Techniques: An empirical study on the KOSPI and HSI
|
[
{
"docid": "a4dbddafcdb2b0b3f26fb5aa2e2de933",
"text": "Ability to predict direction of stock/index price accurately is crucial for market dealers or investors to maximize their profits. Data mining techniques have been successfully shown to generate high forecasting accuracy of stock price movement. Nowadays, in stead of a single method, traders need to use various forecasting techniques to gain multiple signals and more information about the future of the markets. In this paper, ten different techniques of data mining are discussed and applied to predict price movement of Hang Seng index of Hong Kong stock market. The approaches include Linear discriminant analysis (LDA), Quadratic discriminant analysis (QDA), K-nearest neighbor classification, Naïve Bayes based on kernel estimation, Logit model, Tree based classification, neural network, Bayesian classification with Gaussian process, Support vector machine (SVM) and Least squares support vector machine (LS-SVM). Experimental results show that the SVM and LS-SVM generate superior predictive performances among the other models. Specifically, SVM is better than LS-SVM for in-sample prediction but LS-SVM is, in turn, better than the SVM for the out-of-sample forecasts in term of hit rate and error rate criteria.",
"title": ""
},
{
"docid": "386cd963cf70c198b245a3251c732180",
"text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "247c8cd5e076809a208849abe4dce3e5",
"text": "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in !nancial time series forecasting. The objective of this paper is to examine the feasibility of SVM in !nancial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast !nancial time series. ? 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "2c91e6ca6cf72279ad084c4a51b27b1c",
"text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "c8ba40dd66f57f6d192a73be94440d07",
"text": "PURPOSE\nWound infection after an ileostomy reversal is a common problem. To reduce wound-related complications, purse-string skin closure was introduced as an alternative to conventional linear skin closure. This study is designed to compare wound infection rates and operative outcomes between linear and purse-string skin closure after a loop ileostomy reversal.\n\n\nMETHODS\nBetween December 2002 and October 2010, a total of 48 consecutive patients undergoing a loop ileostomy reversal were enrolled. Outcomes were compared between linear skin closure (group L, n = 30) and purse string closure (group P, n = 18). The operative technique for linear skin closure consisted of an elliptical incision around the stoma, with mobilization, and anastomosis of the ileum. The rectus fascia was repaired with interrupted sutures. Skin closure was performed with vertical mattress interrupted sutures. Purse-string skin closure consisted of a circumstomal incision around the ileostomy using the same procedures as used for the ileum. Fascial closure was identical to linear closure, but the circumstomal skin incision was approximated using a purse-string subcuticular suture (2-0 Polysorb).\n\n\nRESULTS\nBetween group L and P, there were no differences of age, gender, body mass index, and American Society of Anesthesiologists (ASA) scores. Original indication for ileostomy was 23 cases of malignancy (76.7%) in group L, and 13 cases of malignancy (77.2%) in group P. The median time duration from ileostomy to reversal was 4.0 months (range, 0.6 to 55.7 months) in group L and 4.1 months (range, 2.2 to 43.9 months) in group P. The median operative time was 103 minutes (range, 45 to 260 minutes) in group L and 100 minutes (range, 30 to 185 minutes) in group P. The median hospital stay was 11 days (range, 5 to 4 days) in group L and 7 days (range, 4 to 14 days) in group P (P < 0.001). Wound infection was found in 5 cases (16.7%) in group L and in one case (5.6%) in group L (P = 0.26).\n\n\nCONCLUSION\nBased on this study, purse-string skin closure after a loop ileostomy reversal showed comparable outcomes, in terms of wound infection rates, to those of linear skin closure. Thus, purse-string skin closure could be a good alternative to the conventional linear closure.",
"title": ""
},
{
"docid": "e3ac61e2a8fe211124446c22f7f88b69",
"text": "Requirement elicitation is a critical activity in the requirement development process and it explores the requirements of stakeholders. The common challenges that analysts face during elicitation process are to ensure effective communication between analyst and the users. Mostly errors in the systems are due to poor communication between user and analyst. This paper proposes an improved approach for requirements elicitation using paper prototype. The paper progresses through an assessment of the new approach using student projects developed for various organizations. A case study project is explained in the paper.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "3028de6940fb7a5af5320c506946edfc",
"text": "Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.",
"title": ""
},
{
"docid": "cf52fd01af4e01f28eeb14e0c6bce7e9",
"text": "Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in non-volatile memories, such as Flash and hard disk drives in traditional systems, using a file system interface. Unfortunately, such an approach suffers from the system performance and energy overheads of locating data, moving data, and translating data between the different formats of these two levels of storage that are accessed via two vastly different interfaces. Yet today, new non-volatile memory (NVM) technologies show the promise of storage capacity and endurance similar to or better than Flash at latencies comparable to DRAM, making them prime candidates for providing applications a persistent single-level store with a single load/store interface to access all system data. Our key insight is that in future systems equipped with NVM, the energy consumed executing operating system and file system code to access persistent data in traditional systems becomes an increasingly large contributor to total energy. The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space. Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.",
"title": ""
},
{
"docid": "e21c4b071723b68af1674740fbf3e993",
"text": "Throughout history, cryptography has played an important role during times of war. The ability to read enemy messages can lead to invaluable knowledge that can be used to lessen casualties and secure victories. The Allied cryptographers during World War II had a major impact on the outcome of the war. The Allies’ ability to intercept and decrypt messages encrypted on the Japanese cipher machine, Purple, and the German cipher machine, Enigma, empowered the Allies with a major advantage during World War II. Without this advantage, the war may have had a different end result. 1 A Brief Introduction on Cryptography Cryptography is the art and science of secret communication [4]. It involves sending a message in such a way so that only the intended audience should be able to read the message with ease. Cryptography has affected many parts of history, including the outcome of World War II. Steganography is the earliest known form of secret communication, which involves hiding the existence of a message, not the meaning of it [4]. An example of concealing a message can be found in ancient China. A sender would use a messenger whose hair would be shaved off, then the message would be tattooed to the messenger’s head. Once the hair grew back thick enough, the existence of the message was concealed. The messenger was then free to travel to the destination to deliver the message. Once there, the messenger would shave his head again so that the message could be read by the intended recipients. This type of secret communication provides little security to a message, since if a message is found, the meaning is known immediately [4]. Consequently, a more secure system was needed to ensure the meaning of a message was not revealed to a potential eavesdropper. Cryptography, hiding the meaning of a message instead of its existence, is a more secure way of sending a message. In order to send a secret message using cryptographic techniques, one would start with the message that is to be sent, called the plaintext [5]. Before encoding, the sender and receiver agree on the algorithm, the rules by which the message is encoded, to use in order to ensure that both parties can read the message. These rules include the type of cipher that is used and",
"title": ""
},
{
"docid": "b7d20190bdb3ef25110b58d87d7e5bf8",
"text": "Field of soft robotics has been widely researched. Modularization of soft robots is one of the effort to expand the field. In this paper, we introduce a magnet connection for modularized soft units which were introduced in our previous research. The magnet connector was designed with off the shelf magnets. Thanks to the magnet connection, it was simpler and more intuitive than the connection method that we used in previous research. Connecting strength of the magnet connection and bending performance of a soft bending actuator assembled with the units were tested. Connecting strength and air leakage prevention of the connector was affordable in a range of actuating pneumatic pressure. We hope that this magnet connector enables modularized soft units being used as a daily item in the future.",
"title": ""
},
{
"docid": "5e1f035df9a6f943c5632078831f5040",
"text": "Animacy is a necessary property for a referent to be an agent, and thus animacy detection is useful for a variety of natural language processing tasks, including word sense disambiguation, co-reference resolution, semantic role labeling, and others. Prior work treated animacy as a word-level property, and has developed statistical classifiers to classify words as either animate or inanimate. We discuss why this approach to the problem is ill-posed, and present a new approach based on classifying the animacy of co-reference chains. We show that simple voting approaches to inferring the animacy of a chain from its constituent words perform relatively poorly, and then present a hybrid system merging supervised machine learning (ML) and a small number of handbuilt rules to compute the animacy of referring expressions and co-reference chains. This method achieves state of the art performance. The supervised ML component leverages features such as word embeddings over referring expressions, parts of speech, and grammatical and semantic roles. The rules take into consideration parts of speech and the hypernymy structure encoded in WordNet. The system achieves an F1 of 0.88 for classifying the animacy of referring expressions, which is comparable to state of the art results for classifying the animacy of words, and achieves an F1 of 0.75 for classifying the animacy of coreference chains themselves. We release our training and test dataset, which includes 142 texts (all narratives) comprising 156,154 words, 34,698 referring expressions, and 10,941 co-reference chains. We test the method on a subset of the OntoNotes dataset, showing using manual sampling that animacy classification is 90%±2% accurate for coreference chains, and 92%±1% for referring expressions. The data also contains 46 folktales, which present an interesting challenge because they often involve characters who are members of traditionally inanimate classes (e.g., stoves that walk, trees that talk). We show that our system is able to detect the animacy of these unusual referents with an F1 of 0.95.",
"title": ""
},
{
"docid": "ba58cbfd68426359a50a5a60251e0322",
"text": "Intelligent power allocation and load management systems have been playing an increasingly important role in aircrafts whose electrical network systems are getting more and more complex. Load shedding used to be the main means of aircraft power management. But the increasing number of electrical components and the emphasis of safety and human comfort call for more resilient power management. In this paper we present a novel power allocation and scheduling formulation which aims for minimum load shedding and optimal generator operational profiles. The problem is formulated as a mixed integer quadratic programming (MIQP) problem and solved by CPLEX optimization tool.",
"title": ""
},
{
"docid": "bffddca72c7e9d6e5a8c760758a98de0",
"text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.",
"title": ""
},
{
"docid": "0ce7465e40b3b13e5c316fb420a766d9",
"text": "We have been developing ldquoSmart Suitrdquo as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.",
"title": ""
},
{
"docid": "70294e6680ad7d662596262c4068a352",
"text": "As cancer development involves pathological vessel formation, 16 angiogenesis markers were evaluated as potential ovarian cancer (OC) biomarkers. Blood samples collected from 172 patients were divided based on histopathological result: OC (n = 38), borderline ovarian tumours (n = 6), non-malignant ovarian tumours (n = 62), healthy controls (n = 50) and 16 patients were excluded. Sixteen angiogenesis markers were measured using BioPlex Pro Human Cancer Biomarker Panel 1 immunoassay. Additionally, concentrations of cancer antigen 125 (CA125) and human epididymis protein 4 (HE4) were measured in patients with adnexal masses using electrochemiluminescence immunoassay. In the comparison between OC vs. non-OC, osteopontin achieved the highest area under the curve (AUC) of 0.79 (sensitivity 69%, specificity 78%). Multimarker models based on four to six markers (basic fibroblast growth factor-FGF-basic, follistatin, hepatocyte growth factor-HGF, osteopontin, platelet-derived growth factor AB/BB-PDGF-AB/BB, leptin) demonstrated higher discriminatory ability (AUC 0.80-0.81) than a single marker (AUC 0.79). When comparing OC with benign ovarian tumours, six markers had statistically different expression (osteopontin, leptin, follistatin, PDGF-AB/BB, HGF, FGF-basic). Osteopontin was the best single angiogenesis marker (AUC 0.825, sensitivity 72%, specificity 82%). A three-marker panel consisting of osteopontin, CA125 and HE4 better discriminated the groups (AUC 0.958) than HE4 or CA125 alone (AUC 0.941 and 0.932, respectively). Osteopontin should be further investigated as a potential biomarker in OC screening and differential diagnosis of ovarian tumours. Adding osteopontin to a panel of already used biomarkers (CA125 and HE4) significantly improves differential diagnosis between malignant and benign ovarian tumours.",
"title": ""
},
{
"docid": "46de8aa53a304c3f66247fdccbe9b39f",
"text": "The effect of pH and electrochemical potential on copper uptake, xanthate adsorption and the hydrophobicity of sphalerite were studied from flotation practice point of view using electrochemical and micro-flotation techniques. Voltammetric studies conducted using the combination of carbon matrix composite (CMC) electrode and surface conduction (SC) electrode show that the kinetics of activation increases with decreasing activating pH. Controlling potential contact angle measurements conducted on a copper-activated SC electrode in xanthate solution with different pHs show that, xanthate adsorption occurs at acidic and alkaline pHs and renders the mineral surface hydrophobic. At near neutral pH, although xanthate adsorbs on Cu:ZnS, the mineral surface is hydrophilic. Microflotation tests confirm this finding. Cleaning reagent was used to improve the flotation response of sphalerite at near neutral pH.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "de408de1915d43c4db35702b403d0602",
"text": "real-time population health assessment and monitoring D. L. Buckeridge M. Izadi A. Shaban-Nejad L. Mondor C. Jauvin L. Dubé Y. Jang R. Tamblyn The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.",
"title": ""
},
{
"docid": "81173801bcecfd51e828337d2613dcba",
"text": "There is increasing awareness of the large degree of crosslinguistic diversity involved in the structural realisation of information packaging (or information structure). Whereas English and many Germanic languages primarily exploit intonation for informational purposes , in other languages, like Catalan, syntax plays the primary role in the realisation of information packaging and intonation is reduced to a secondary role. In yet another group of languages the primary structural correlate is morphology. This paper provides a contrastive analysis of the structural properties of information packaging in a number of languages. It also contains a discussion of some basic issues concerning information packaging and identiies a set of information-packaging primitives that are applied to the crosslinguistic facts.",
"title": ""
},
{
"docid": "fdfbcacd5a31038ecc025315c7483b5a",
"text": "Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. is paper explores the eectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial eorts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more eective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this nding, we conducted a manual user evaluation, which conrms that answers from the CNN are detectably beer than those from idf-weighted word overlap. is result suggests that users are sensitive to relatively small dierences in answer selection quality.",
"title": ""
},
{
"docid": "ecf56a68fbd1df54b83251b9dfc6bf9f",
"text": "All our lives, we interact with the space around us, whether we are finding our way to a remote cabana in an exotic tropical isle or reaching for a ripe mango on the tree beside the cabana or finding a comfortable position in the hammock to snack after the journey. Each of these natural situations is experienced differently, and as a consequence, each is conceptualized differently. Our knowledge of space, unlike geometry or physical measurements of space, is constructed out of the things in space, not space itself. Mental spaces are schematized, eliminating detail and simplifying features around a framework consisting of elements and the relations among them. Our research suggests that which elements and spatial relations are included and how they are schematized varies with the space in ways that reflect our experience in the space. The space of navigation is too large to be seen from a single place (short of flying over it, but that is a different experience). To find our way in a large environment requires putting together information from different views or different sources. For the most part, the space of navigation is conceptualized as a two-dimensional plane, like a map. Maps, too, are schematized, yet they differ in significant ways from mental representations of space. The space around the body stands in contrast to the space of navigation. It can be seen from a single place, given rotation in place. It is the space of immediate action, our own or the things around us. It is also conceptualized schematically, but in three dimensions. Finally, there is the space of our own bodies. This space is the space of our own actions and our own sensations, experienced from the inside as well as the outside. It is schematized in terms of our limbs. Knowledge of these three spaces, that is, knowledge of the relative locations of the places in navigation space that are critical to our lives, knowledge of the space we are currently interacting with, and knowledge of the space of our bodies, is essential to finding our way in the world, to fulfilling our needs, and to avoiding danger, in short, necessary to survival.",
"title": ""
},
{
"docid": "33b129cb569c979c81c0cb1c0a5b9594",
"text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.",
"title": ""
}
] |
scidocsrr
|
3a411a79274b079b2646a0bba6249c86
|
Deep Abstract Q-Networks
|
[
{
"docid": "28ee32149227e4a26bea1ea0d5c56d8c",
"text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.",
"title": ""
},
{
"docid": "bf4594673a4e450b005096401e771cd5",
"text": "The PixelCNN model used in this paper is a lightweight variant of the Gated PixelCNN introduced in (van den Oord et al., 2016a). It consists of a 7 × 7 masked convolution, followed by two residual blocks with 1×1 masked convolutions with 16 feature planes, and another 1×1 masked convolution producing 64 features planes, which are mapped by a final masked convolution to the output logits. Inputs are 42 × 42 greyscale images, with pixel values quantized to 8 bins.",
"title": ""
}
] |
[
{
"docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d",
"text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.",
"title": ""
},
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "084ceedc5a45b427503f776a5c9fea68",
"text": "Although the worldwide incidence of infant botulism is rare, the majority of cases are diagnosed in the United States. An infant can acquire botulism by ingesting Clostridium botulinum spores, which are found in soil or honey products. The spores germinate into bacteria that colonize the bowel and synthesize toxin. As the toxin is absorbed, it irreversibly binds to acetylcholine receptors on motor nerve terminals at neuromuscular junctions. The infant with botulism becomes progressively weak, hypotonic and hyporeflexic, showing bulbar and spinal nerve abnormalities. Presenting symptoms include constipation, lethargy, a weak cry, poor feeding and dehydration. A high index of suspicion is important for the diagnosis and prompt treatment of infant botulism, because this disease can quickly progress to respiratory failure. Diagnosis is confirmed by isolating the organism or toxin in the stool and finding a classic electromyogram pattern. Treatment consists of nutritional and respiratory support until new motor endplates are regenerated, which results in spontaneous recovery. Neurologic sequelae are seldom seen. Some children require outpatient tube feeding and may have persistent hypotonia.",
"title": ""
},
{
"docid": "8e4eb520c80dfa8d39c69b1273ea89c8",
"text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.",
"title": ""
},
{
"docid": "325796828b9d25d50eb69f62d9eabdbb",
"text": "We present a new algorithm to reduce the space complexity of heuristic search. It is most effective for problem spaces that grow polynomially wi th problem size, but contain large numbers of short cycles. For example, the problem of finding a lowest-cost corner-to-corner path in a d-dimensional grid has application to gene sequence alignment in computational biology. The main idea is to perform a bidirectional search, but saving only the Open lists and not the Closed lists. Once the search completes, we have one node on an optimal path, but don't have the solution path itself. The path is then reconstructed by recursively applying the same algorithm between the in i t ia l node and the in termediate node, and also between the intermediate node and the goal node. If n is the length of the grid in each dimension, and d is the number of dimensions, this algorithm reduces the memory requirement from to The time complexity only increases by a constant factor of in two dimensions, and 1.8 in three dimensions.",
"title": ""
},
{
"docid": "d399e142488766759abf607defd848f0",
"text": "The high penetration of cell phones in today's global environment offers a wide range of promising mobile marketing activities, including mobile viral marketing campaigns. However, the success of these campaigns, which remains unexplored, depends on the consumers' willingness to actively forward the advertisements that they receive to acquaintances, e.g., to make mobile referrals. Therefore, it is important to identify and understand the factors that influence consumer referral behavior via mobile devices. The authors analyze a three-stage model of consumer referral behavior via mobile devices in a field study of a firm-created mobile viral marketing campaign. The findings suggest that consumers who place high importance on the purposive value and entertainment value of a message are likely to enter the interest and referral stages. Accounting for consumers' egocentric social networks, we find that tie strength has a negative influence on the reading and decision to refer stages and that degree centrality has no influence on the decision-making process. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "ee4ebafe1b40e3d2020b2fb9a4b881f6",
"text": "Probing the lowest energy configuration of a complex system by quantum annealing was recently found to be more effective than its classical, thermal counterpart. By comparing classical and quantum Monte Carlo annealing protocols on the two-dimensional random Ising model (a prototype spin glass), we confirm the superiority of quantum annealing relative to classical annealing. We also propose a theory of quantum annealing based on a cascade of Landau-Zener tunneling events. For both classical and quantum annealing, the residual energy after annealing is inversely proportional to a power of the logarithm of the annealing time, but the quantum case has a larger power that makes it faster.",
"title": ""
},
{
"docid": "647f8e9ece2c7663e2b8767f0694fec5",
"text": "Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, such systems are typically evaluated using a ranking-based performance metric such as the area under the precision-recall curve, the Fβ score, precision at fixed recall, etc. Obviously, it is desirable to train such systems to optimize the metric of interest. In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale retrieval systems are instead trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracyobjective baseline. Proceedings of the 20 International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. Copyright 2017 by the author(s).",
"title": ""
},
{
"docid": "7b1e2439e3be5110f8634394f266da7c",
"text": "ÐIn the absence of cues for absolute depth measurements as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene but it will not provide information about the actual ªscaleº of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, object recognition, under unconstrained conditions, remains difficult and unreliable for current computational approaches. Here, we propose a source of information for absolute depth estimation based on the whole scene structure that does not rely on specific objects. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene and, therefore, its absolute mean depth. We illustrate the interest in computing the mean depth of the scene with application to scene recognition and object detection.",
"title": ""
},
{
"docid": "13452d0ceb4dfd059f1b48dba6bf5468",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a2cdcd9400c2c6663b3672e9cf8d41f6",
"text": "The use of immersive virtual reality (VR) systems in museums is a recent trend, as the development of new interactive technologies has inevitably impacted the more traditional sciences and arts. This is more evident in the case of novel interactive technologies that fascinate the broad public, as has always been the case with virtual reality. The increasing development of VR technologies has matured enough to expand research from the military and scientific visualization realm into more multidisciplinary areas, such as education, art and entertainment. This paper analyzes the interactive virtual environments developed at an institution of informal education and discusses the issues involved in developing immersive interactive virtual archaeology projects for the broad public.",
"title": ""
},
{
"docid": "9e8d4b422a7ed05ee338fcd426dab723",
"text": "Entity typing is an essential task for constructing a knowledge base. However, many non-English knowledge bases fail to type their entities due to the absence of a reasonable local hierarchical taxonomy. Since constructing a widely accepted taxonomy is a hard problem, we propose to type these non-English entities with some widely accepted taxonomies in English, such as DBpedia, Yago and Freebase. We define this problem as cross-lingual type inference. In this paper, we present CUTE to type Chinese entities with DBpedia types. First we exploit the cross-lingual entity linking between Chinese and English entities to construct the training data. Then we propose a multi-label hierarchical classification algorithm to type these Chinese entities. Experimental results show the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "834a5cb9f2948443fbb48f274e02ca9c",
"text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complimentary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development in taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.",
"title": ""
},
{
"docid": "7d1faee4929d60d952cc8c2c12fa16d3",
"text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.",
"title": ""
},
{
"docid": "501c9fa6829242962f182aff2dbbd6f8",
"text": "We present an instance segmentation scheme based on pixel affinity information, which is the relationship of two pixels belonging to a same instance. In our scheme, we use two neural networks with similar structure. One is to predict pixel level semantic score and the other is designed to derive pixel affinities. Regarding pixels as the vertexes and affinities as edges, we then propose a simple yet effective graph merge algorithm to cluster pixels into instances. Experimental results show that our scheme can generate fine grained instance mask. With Cityscapes training data, the proposed scheme achieves 27.3 AP on test set.",
"title": ""
},
{
"docid": "191058192146249d5cf9493eb41a37c2",
"text": "Cryptocurrency networks have given birth to a diversity of start-ups and attracted a huge influx of venture capital to invest in these start-ups for creating and capturing value within and between such networks. Synthesizing strategic management and information systems (IS) literature, this study advances a unified theoretical framework for identifying and investigating how cryptocurrency companies configure value through digital business models. This framework is then employed, via multiple case studies, to examine digital business models of companies within the bitcoin network. Findings suggest that companies within the bitcoin network exhibits six generic digital business models. These six digital business models are in turn driven by three modes of value configurations with their own distinct logic for value creation and mechanisms for value capturing. A key finding of this study is that value-chain and value-network driven business models commercialize their products and services for each value unit transfer, whereas commercialization for value-shop driven business models is realized through the subsidization of direct users by revenue generating entities. This study contributes to extant literature on value configurations and digital businesses models within the emerging and increasingly pervasive domain of cryptocurrency networks.",
"title": ""
},
{
"docid": "645a92cd2f789f8708a522a35100611b",
"text": "INTRODUCTION\nMalignant Narcissism has been recognized as a serious condition but it has been largely ignored in psychiatric literature and research. In order to bring this subject to the attention of mental health professionals, this paper presents a contemporary synthesis of the biopsychosocial dynamics and recommendations for treatment of Malignant Narcissism.\n\n\nMETHODS\nWe reviewed the literature on Malignant Narcissism which was sparse. It was first described in psychiatry by Otto Kernberg in 1984. There have been few contributions to the literature since that time. We discovered that the syndrome of Malignant Narcissism was expressed in fairy tales as a part of the collective unconscious long before it was recognized by psychiatry. We searched for prominent malignant narcissists in recent history. We reviewed the literature on treatment and developed categories for family assessment.\n\n\nRESULTS\nMalignant Narcissism is described as a core Narcissistic personality disorder, antisocial behavior, ego-syntonic sadism, and a paranoid orientation. There is no structured interview or self-report measure that identifies Malignant Narcissism and this interferes with research, clinical diagnosis and treatment. This paper presents a synthesis of current knowledge about Malignant Narcissism and proposes a foundation for treatment.\n\n\nCONCLUSIONS\nMalignant Narcissism is a severe personality disorder that has devastating consequences for the family and society. It requires attention within the discipline of psychiatry and the social science community. We recommend treatment in a therapeutic community and a program of prevention that is focused on psychoeducation, not only in mental health professionals, but in the wider social community.",
"title": ""
},
{
"docid": "14fc402353ddc5ef3ebb1a28682b44ad",
"text": "Service Oriented Architecture (SOA) is an architectural style that supports service orientation. In reality, SOA is much more than architecture. SOA adoption is prerequisite for organization to excel their service deliveries, as the delivery platforms are shifting to mobile, cloud and social media. A maturity model is a tool to accelerate enterprise SOA adoption, however it depends on how it should be applied. This paper presents a literature review of existing maturity models and proposes 5 major aspects that a maturity model has to address to improve SOA practices of an enterprise. A maturity model can be used as: (i) a roadmap for SOA adoption, (ii) a reference guide for SOA adoption, (iii) a tool to gauge maturity of process execution, (iv) a tool to measure the effectiveness of SOA motivations, and (v) a review tool for governance framework. This paper also sheds light on how SOA maturity assessment can be modeled. A model for SOA process execution maturity and perspective maturity assessment has been proposed along with a framework to include SOA scope of adoption.",
"title": ""
},
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] |
scidocsrr
|
11662c77ce61b9476c57a5094b6ed761
|
Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition
|
[
{
"docid": "2f20bca0134eb1bd9d65c4791f94ddcc",
"text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"title": ""
}
] |
[
{
"docid": "3a0da20211697fbcce3493aff795556c",
"text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.",
"title": ""
},
{
"docid": "466c537fca72aaa1e9cda2dc22c0f504",
"text": "This paper presents a single-phase grid-connected photovoltaic (PV) module-integrated converter (MIC) based on cascaded quasi-Z-source inverters (qZSI). In this system, each qZSI module serves as an MIC and is connected to one PV panel. Due to the cascaded structure and qZSI topology, the proposed MIC features low-voltage gain requirement, single-stage energy conversion, enhanced reliability, and good output power quality. Furthermore, the enhancement mode gallium nitride field-effect transistors (eGaN FETs) are employed in the qZSI module for efficiency improvement at higher switching frequency. It is found that the qZSI is very suitable for the application of eGaN FETs because of the shoot-through capability. Optimized module design is developed based on the derived qZSI ac equivalent model and power loss analytical model to achieve high efficiency and high power density. A design example of qZSI module is presented for a 250-W PV panel with 25-50-V output voltage. The simulation and experimental results prove the validity of the analytical models. The final module prototype design achieves up to 98.06% efficiency with 100-kHz switching frequency.",
"title": ""
},
{
"docid": "d723ffedb1d346742004b0585ee93f0b",
"text": "In today's world, apart from the fact that systems and products are becoming increasingly complex, electronic technology is rapidly progressing in both miniaturization and higher complexity. Consequently, these facts are accompanied with new failures modes. Standard reliability tools cope to tackle all of the new emerging challenges. New technology and designs require adapted approaches to ensure that the products cost-effectively and timely meet desired reliability goals. The Physics-of-Failure (P-o-F) represents one approach to reliability assessment based on modeling and simulation that relies on understanding the physical processes contributing to the appearance of the critical failures. This paper outlines the classical approaches to reliability engineering and discusses advantages of the Physics-of-Failure approach. It also stresses that the P-o-F approach should be probabilistic in order to include inevitable variations of variables involved in processes contributing to the occurrence of failures in the analysis.",
"title": ""
},
{
"docid": "52ef7357fa379b7eede3c4ceee448a81",
"text": "(Note: This is a completely revised version of the article that was originally published in ACM Crossroads, Volume 13, Issue 4. Revisions were needed because of major changes to the Natural Language Toolkit project. The code in this version of the article will always conform to the very latest version of NLTK (v2.0b9 as of November 2010). Although the code is always tested, it is possible that a bug or two may have been introduced in the code during the course of this revision. If you find any, please report them to the author. If you are still using version 0.7 of the toolkit for some reason, please refer to http://www.acm.org/crossroads/xrds13-4/natural_language.html).",
"title": ""
},
{
"docid": "697ac701dca9f2c4343d0de3aadd0fa1",
"text": "We propose a two phase time dependent vehicle routing and scheduling optimization model that identifies the safest routes, as a substitute for the classical objectives given in the literature such as shortest distance or travel time, through (1) avoiding recurring congestions, and (2) selecting routes that have a lower probability of crash occurrences and non-recurring congestion caused by those crashes. In the first phase, we solve a mixed-integer programming model which takes the dynamic speed variations into account on a graph of roadway networks according to the time of day, and identify the routing of a fleet and sequence of nodes on the safest feasible paths. Second phase considers each route as an independent transit path (fixed route with fixed node sequences), and tries to avoid congestion by rescheduling the departure times of each vehicle from each node, and by adjusting the sub-optimal speed on each arc. A modified simulated annealing (SA) algorithm is formulated to solve both complex models iteratively, which is found to be capable of providing solutions in a considerably short amount of time. In this paper, speed (and travel time) variation with respect to the hour of the day is calculated via queuing models (i.e., M/G/1) to capture the stochasticity of travel times more accurately unlike the most researches in this area, which assume the speed on arcs to be a fixed value or a time dependent step function. First, we demonstrate the accurate performance of M/G/1 in estimation and predicting speeds and travel times for those arcs without readily available speed data. Crash data, on the other hand, is obtained for each arc. Next, 24 scenarios, which correspond to each hour of a day, are developed, and are fed to the proposed solution algorithms. This is followed by evaluating the routing schema for each scenario where the following objective * Corresponding author. Tel.: +1-850-405-6688 E-mail address: Aschkan@ufl.edu functions are utilized: (1) the minimization of the traffic delay (maximum congestion avoidance), and (2) the minimization of the traffic crash risk, and (3) the combination of two objectives. Using these objectives, we identify the safest routes, as a substitute for the classical objectives given in the literature such as shortest distance or travel time, through (1) avoiding recurring congestions, and (2) selecting routes that have a lower probability of crash occurrences and non-recurring congestion caused by those crashes. This also allows us to discuss the feasibility and applicability of our model. Finally, the proposed methodology is applied on a benchmark network as well as a small real-world case study application for the City of Miami, Florida. Results suggest that in some instances, both the travelled distance and travel time increase in return for a safer route, however, the advantages of safer route can outweigh this slight increase.",
"title": ""
},
{
"docid": "1310fd212958fa5b18ff67efe7cade63",
"text": "In this paper, a new design method of a tunable oscillator using a suspended-stripline resonator is presented. The negative resistance of an FET mounted on microstrip line (MSL) is combined with a high Q suspended-stripline (SSL) resonator to produce a tunable oscillator with good phase noise. The new MSL-to-SSL transition facilitates easy connection between the MSL-based circuits and the SSL module. The proposed oscillator is also frequency-tunable using a tuner located on the top of the SSL housing. The measured phase noise of the implemented oscillator at 5.148 GHz is -104.34 dBc@100 kHz and -133.21 dBc@1 MHz with 125.7 MHz of frequency tuning.",
"title": ""
},
{
"docid": "bc4fa6a77bf0ea02456947696dc6dca3",
"text": "We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of largescale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4 to 10 times faster on average.",
"title": ""
},
{
"docid": "76a2c62999a256076cdff0fffefca1eb",
"text": "Learning a second language is challenging. Becoming fluent requires learning contextual information about how language should be used as well as word meanings and grammar. The majority of existing language learning applications provide only thin context around content. In this paper, we present Crystallize, a collaborative 3D game that provides rich context along with scaffolded learning and engaging gameplay mechanics. Players collaborate through joint tasks, or quests. We present a user study with 42 participants that examined the impact of low and high levels of task interdependence on language learning experience and outcomes. We found that requiring players to help each other led to improved collaborative partner interactions, learning outcomes, and gameplay. A detailed analysis of the chat-logs further revealed that changes in task interdependence affected learning behaviors.",
"title": ""
},
{
"docid": "8f9309ebfc87de5eb7cf715c0370da54",
"text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.",
"title": ""
},
{
"docid": "857b9753f213d704b9d7d3b166ff9848",
"text": "The aim of rehabilitation robotic area is to research on the application of robotic devices to therapeutic procedures. The goal is to achieve the best possible motor, cognitive and functional recovery for people with impairments following various diseases. Pneumatic actuators are attractive for robotic rehabilitation applications because they are lightweight, powerful, and compliant, but their control has historically been difficult, limiting their use. This article first reviews the current state-of-art in rehabilitation robotic devices with pneumatic actuation systems reporting main features and control issues of each therapeutic device. Then, a new pneumatic rehabilitation robot for proprioceptive neuromuscular facilitation therapies and for relearning daily living skills: like taking a glass, drinking, and placing object on shelves is described as a case study and compared with the current pneumatic rehabilitation devices.",
"title": ""
},
{
"docid": "7381d61eea849ecdf74c962042d0c5ff",
"text": "Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is very important for battlefield awareness. For SAR systems mounted on a UAV, the motion errors can be considerably high due to atmospheric turbulence and aircraft properties, such as its small size, which makes motion compensation (MOCO) in UAV SAR more urgent than other SAR systems. In this paper, based on 3-D motion error analysis, a novel 3-D MOCO method is proposed. The main idea is to extract necessary motion parameters, i.e., forward velocity and displacement in line-of-sight direction, from radar raw data, based on an instantaneous Doppler rate estimate. Experimental results show that the proposed method is suitable for low- or medium-altitude UAV SAR systems equipped with a low-accuracy inertial navigation system.",
"title": ""
},
{
"docid": "6624487fd7296588c934ad1d74bfc5ea",
"text": "We report an efficient method for fabricating flexible membranes of electrospun carbon nanofiber/tin(IV) sulfide (CNF@SnS2) core/sheath fibers. CNF@SnS2 is a new photocatalytic material that can be used to treat wastewater containing high concentrations of hexavalent chromium (Cr(VI)). The hierarchical CNF@SnS2 core/sheath membranes have a three-dimensional macroporous architecture. This provides continuous channels for the rapid diffusion of photoelectrons generated by SnS2 nanoparticles under visible light irradiation. The visible light (λ > 400 nm) driven photocatalytic properties of CNF@SnS2 are evaluated by the reduction of water-soluble Cr(VI). CNF@SnS2 exhibits high visible light-driven photocatalytic activity because of its low band gap of 2.34 eV. Moreover, CNF@SnS2 exhibits good photocatalytic stability and excellent cycling stability. Under visible light irradiation, the optimized CNF@SnS2 membranes exhibit a high rate of degradation of 250 mg/L of aqueous Cr(VI) and can completely degrade the Cr(VI) within 90 min.",
"title": ""
},
{
"docid": "30bc96451dd979a8c08810415e4a2478",
"text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. Circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.",
"title": ""
},
{
"docid": "40c4175be1573d9542f6f9f859fafb01",
"text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.",
"title": ""
},
{
"docid": "43ff7d61119cc7b467c58c9c2e063196",
"text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5058d6002c43298442ebdf2902e6adf3",
"text": "Non-contact image photoplethysmography has gained a lot of attention during the last 5 years. Starting with the work of Verkruysse et al. [1], various methods for estimation of the human pulse rate from video sequences of the face under ambient illumination have been presented. Applied on a mobile service robot aimed to motivate elderly users for physical exercises, the pulse rate can be a valuable information in order to adapt to the users conditions. For this paper, a typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted, which is the key factor for robust pulse rate extraction even, if the subject is moving. A benchmark data set is introduced focusing on the amount of motion of the head during the measurement.",
"title": ""
},
{
"docid": "35258abbafac62dbfbd0be08617e95bf",
"text": "Code Reuse Attacks (CRAs) recently emerged as a new class of security exploits. CRAs construct malicious programs out of small fragments (gadgets) of existing code, thus eliminating the need for code injection. Existing defenses against CRAs often incur large performance overheads or require extensive binary rewriting and other changes to the system software. In this paper, we examine a signature-based detection of CRAs, where the attack is detected by observing the behavior of programs and detecting the gadget execution patterns. We first demonstrate that naive signature-based defenses can be defeated by introducing special “delay gadgets” as part of the attack. We then show how a software-configurable signature-based approach can be designed to defend against such stealth CRAs, including the attacks that manage to use longer-length gadgets. The proposed defense (called SCRAP) can be implemented entirely in hardware using simple logic at the commit stage of the pipeline. SCRAP is realized with minimal performance cost, no changes to the software layers and no implications on binary compatibility. Finally, we show that SCRAP generates no false alarms on a wide range of applications.",
"title": ""
},
{
"docid": "1472e8a0908467404c01d236d2f39c58",
"text": "Millimetre wave antennas are typically used for applications like anti-collision car radar or sensory. A new and upcoming application is the use of 60 GHz antennas for high date rate point-to-point connections to serve wireless local area networks. For high gain antennas, configurations using lenses in combination with planar structures are often applied. However, single layer planar arrays might offer a more cost-efficient solution, especially if the antenna and the RF-circuitry are realised on one and the same substrate. The design of millimetre wave antennas has to cope with the severe impacts of manufacturing tolerances and losses at these frequencies. Reproducibility can become poor in such cases. The successful design and realisation of a cost-efficient 60 GHz planar patch array (8/spl times/8 elements) with high reproducibility for point-to-point connections is presented. Important design aspects are highlighted and manufacturing tolerances and losses are analysed. Measurement results of different prototypes are presented to show the reproducibility of the antenna layout.",
"title": ""
},
{
"docid": "67509b64aaf1ead0bcba557d8cfe84bc",
"text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.",
"title": ""
}
] |
scidocsrr
|
0fda572b0a651c2c09b38584515fa36e
|
Data-driven comparison of spatio-temporal monitoring techniques
|
[
{
"docid": "5508603a802abb9ab0203412b396b7bc",
"text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.",
"title": ""
},
{
"docid": "2bdaaeb18db927e2140c53fcc8d4fa30",
"text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, baery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural seings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the laer extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.",
"title": ""
}
] |
[
{
"docid": "8d90b9fbf7af1ea36f93f88e6ce11ba2",
"text": "Given its serious implications for psychological and socio-emotional health, the prevention of problem gambling among adolescents is increasingly acknowledged as an area requiring attention. The theory of planned behavior (TPB) is a well-established model of behavior change that has been studied in the development and evaluation of primary preventive interventions aimed at modifying cognitions and behavior. However, the utility of the TPB has yet to be explored as a framework for the development of adolescent problem gambling prevention initiatives. This paper first examines the existing empirical literature addressing the effectiveness of school-based primary prevention programs for adolescent gambling. Given the limitations of existing programs, we then present a conceptual framework for the integration of the TPB in the development of effective problem gambling preventive interventions. The paper describes the TPB, demonstrates how the framework has been applied to gambling behavior, and reviews the strengths and limitations of the model for the design of primary prevention initiatives targeting adolescent risk and addictive behaviors, including adolescent gambling.",
"title": ""
},
{
"docid": "1514bae0c1b47f5aaf0bfca6a63d9ce9",
"text": "The persistence of racial inequality in the U.S. labor market against a general backdrop of formal equality of opportunity is a troubling phenomenon that has significant ramifications on the design of hiring policies. In this paper, we show that current group disparate outcomes may be immovable even when hiring decisions are bound by an input-output notion of “individual fairness.” Instead, we construct a dynamic reputational model of the labor market that illustrates the reinforcing nature of asymmetric outcomes resulting from groups’ divergent accesses to resources and as a result, investment choices. To address these disparities, we adopt a dual labor market composed of a Temporary Labor Market (TLM), in which firms’ hiring strategies are constrained to ensure statistical parity of workers granted entry into the pipeline, and a Permanent Labor Market (PLM), in which firms hire top performers as desired. Individual worker reputations produce externalities for their group; the corresponding feedback loop raises the collective reputation of the initially disadvantaged group via a TLM fairness intervention that need not be permanent. We show that such a restriction on hiring practices induces an equilibrium that, under particular market conditions, Pareto-dominates those arising from strategies that statistically discriminate or employ a “group-blind” criterion. The enduring nature of equilibria that are both inequitable and Pareto suboptimal suggests that fairness interventions beyond procedural checks of hiring decisions will be of critical importance in a world where machines play a greater role in the employment process. ACM Reference Format: Lily Hu and Yiling Chen. 2018. A Short-term Intervention for Long-term Fairness in the Labor Market. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https: //doi.org/10.1145/3178876.3186044",
"title": ""
},
{
"docid": "3fd685b63f92d277fb5a8e524e065277",
"text": "State-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of “snapshot” images, recorded at discrete points in time. Visual information gets time quantized at a predetermined frame rate which has no relation to the dynamics present in the scene. Furthermore, each recorded frame conveys the information from all pixels, regardless of whether this information, or a part of it, has changed since the last frame had been acquired. This acquisition method limits the temporal resolution, potentially missing important information, and leads to redundancy in the recorded image data, unnecessarily inflating data rate and volume. Biology is leading the way to a more efficient style of image acquisition. Biological vision systems are driven by events happening within the scene in view, and not, like image sensors, by artificially created timing and control signals. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer being imposed externally to an array of pixels but the decision making is transferred to the single pixel that handles its own information individually. In this paper, recent developments in bioinspired, neuromorphic optical sensing and artificial vision are presented and discussed. It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency. Demanding vision tasks such as real-time 3-D mapping, complex multiobject tracking, or fast visual feedback loops for sensory-motor action, tasks that often pose severe, sometimes insurmountable, challenges to conventional artificial vision systems, are in reach using bioinspired vision sensing and processing techniques.",
"title": ""
},
{
"docid": "69b0c5a4a3d5fceda5e902ec8e0479bb",
"text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.",
"title": ""
},
{
"docid": "bd3ba8635a8cd2112a1de52c90e2a04b",
"text": "Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.",
"title": ""
},
{
"docid": "bc6f9ef52c124675c62ccb8a1269a9b8",
"text": "We explore 3D printing physical controls whose tactile response can be manipulated programmatically through pneumatic actuation. In particular, by manipulating the internal air pressure of various pneumatic elements, we can create mechanisms that require different levels of actuation force and can also change their shape. We introduce and discuss a series of example 3D printed pneumatic controls, which demonstrate the feasibility of our approach. This includes conventional controls, such as buttons, knobs and sliders, but also extends to domains such as toys and deformable interfaces. We describe the challenges that we faced and the methods that we used to overcome some of the limitations of current 3D printing technology. We conclude with example applications and thoughts on future avenues of research.",
"title": ""
},
{
"docid": "043306203de8365bd1930a9c0b4138c7",
"text": "In this paper, we compare two different methods for automatic Arabic speech recognition for isolated words and sentences. Isolated word/sentence recognition was performed using cepstral feature extraction by linear predictive coding, as well as Hidden Markov Models (HMM) for pattern training and classification. We implemented a new pattern classification method, where we used Neural Networks trained using the Al-Alaoui Algorithm. This new method gave comparable results to the already implemented HMM method for the recognition of words, and it has overcome HMM in the recognition of sentences. The speech recognition system implemented is part of the Teaching and Learning Using Information Technology (TLIT) project which would implement a set of reading lessons to assist adult illiterates in developing better reading capabilities.",
"title": ""
},
{
"docid": "980ad058a2856048765f497683557386",
"text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.",
"title": ""
},
{
"docid": "af8fbdfbc4c4958f69b3936ff2590767",
"text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.",
"title": ""
},
{
"docid": "7a87ffc98d8bab1ff0c80b9e8510a17d",
"text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"title": ""
},
{
"docid": "33aa9af9a5f3d3f0b8bf21dca3b13d2f",
"text": "Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.",
"title": ""
},
{
"docid": "aa2401a302c7f0b394abb11961420b50",
"text": "A program is then asked the question “what was too small” as a follow-up to (1a), and the question “what was too big” as a follow-up to (1b). Levesque et. al. call a sentence such as that in (1) “Google proof” since a system that processed a large corpus cannot “learn” how to resolve such references by finding some statistical correlations in the data, as the only difference between (1a) and (1b) are antonyms that are known to co-occur in similar contexts with the same frequency. In a recent paper Trinh and Le (2018) henceforth T&L suggested that they have successfully formulated a „simple‟ machine learning method for performing commonsense reasoning, and in particular, the kind of reasoning that would be required in the process of language understanding. In doing so, T&L use the Winograd Schema (WS) challenge as a benchmark. In simple terms, T&L suggest the following method for “learning” how to successfully resolve the reference “it” in sentences such as those in (1): generate two",
"title": ""
},
{
"docid": "0122f015e3c054840782d09ede609390",
"text": "Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules the antecedent of a rule is a conjunction of conditions on the attribute values, and the consequent is a linear combination of attribute values. Each rule uses a PageHinkley test to detect changes in the process generating data and react to changes by pruning the rule set. In the experimental section we report the results of AMRules on benchmark regression problems, and compare the performance of our system with other streaming regression algorithms.",
"title": ""
},
{
"docid": "ad9cd1137223583c9324f7670688f098",
"text": "Sources of multidimensional data are becoming more prevalent, partly due to the rise of the Internet of Things (IoT), and with that the need to ingest and analyze data streams at rates higher than before. Some industrial IoT applications require ingesting millions of records per second, while processing queries on recently ingested and historical data. Unfortunately, existing database systems suited to multidimensional data exhibit low per-node ingestion performance, and even if they can scale horizontally in distributed settings, they require large number of nodes to meet such ingest demands. For this reason, in this paper we evaluate a singlenode multidimensional data store for high-velocity sensor data. Its design centers around a two-level indexing structure, wherein the global index is an in-memory R*-tree and the local indices are serialized kd-trees. This study is confined to records with numerical indexing fields and range queries, and covers ingest throughput, query response time, and storage footprint. We show that the adopted design streamlines data ingestion and offers ingress rates two orders of magnitude higher than those of a selection of open-source database systems, namely Percona Server, SQLite, and Druid. Our prototype also reports query response times comparable to or better than those of Percona Server and Druid, and compares favorably in terms of storage footprint. In addition, we evaluate a kd-tree partitioning based scheme for grouping incoming streamed data records. Compared to a random scheme, this scheme produces less overlap between groups of streamed records, but contrary to what we expected, such reduced overlap does not translate into better query performance. By contrast, the local indices prove much more beneficial to query performance. We believe the experience reported in this paper is valuable to practitioners and researchers alike interested in building database systems for high-velocity multidimensional data.",
"title": ""
},
{
"docid": "ba302b1ee508edc2376160b3ad0a751f",
"text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. The study of different effects refer to materials commonly used at building façades, i.e. grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.",
"title": ""
},
{
"docid": "8beca44b655835e7a33abd8f1f343a6f",
"text": "Taxonomies have been developed as a mechanism for cyber attack categorisation. However, when one considers the recent and rapid evolution of attacker techniques and targets, the applicability and effectiveness of these taxonomies should be questioned. This paper applies two approaches to the evaluation of seven taxonomies. The first employs a criteria set, derived through analysis of existing works in which critical components to the creation of taxonomies are defined. The second applies historical attack data to each taxonomy under review, more specifically, attacks in which industrial control systems have been targeted. This combined approach allows for a more in-depth understanding of existing taxonomies to be developed, from both a theoretical and practical perspective.",
"title": ""
},
{
"docid": "01d4f1311afdd38c1afae967542768e6",
"text": "Cortana, one of the new features introduced by Microsoft in Windows 10 desktop operating systems, is a voice activated personal digital assistant that can be used for searching stuff on device or web, setting up reminders, tracking users’ upcoming flights, getting news tailored to users’ interests, sending text and emails, and more. Being the platform relatively new, the forensic examination of Cortana has been largely unexplored in literature. This paper seeks to determine the data remnants of Cortana usage in a Windows 10 personal computer (PC). The research contributes in-depth understanding of the location of evidentiary artifacts on hard disk and the type of information recorded in these artifacts as a result of user activities on Cortana. For decoding and exporting data from one of the databases created by Cortana application, four custom python scripts have been developed. Additionally, as a part of this paper, a GUI tool called CortanaDigger is developed for extracting and listing web search strings, as well as timestamp of search made by a user on Cortana box. Several experiments are conducted to track reminders (based on time, place, and person) and detect anti-forensic attempts like evidence modification and evidence destruction carried out on Cortana artifacts. Finally, forensic usefulness of Cortana artifacts is demonstrated in terms of a Cortana web search timeline constructed over a period of time.",
"title": ""
},
{
"docid": "6c411f36e88a39684eb9779462117e6b",
"text": "Number of people who use internet and websites for various purposes is increasing at an astonishing rate. More and more people rely on online sites for purchasing songs, apparels, books, rented movies etc. The competition between the online sites forced the web site owners to provide personalized services to their customers. So the recommender systems came into existence. Recommender systems are active information filtering systems that attempt to present to the user, information items in which the user is interested in. The websites implement recommender system feature using collaborative filtering, content based or hybrid approaches. The recommender systems also suffer from issues like cold start, sparsity and over specialization. Cold start problem is that the recommenders cannot draw inferences for users or items for which it does not have sufficient information. This paper attempts to propose a solution to the cold start problem by combining association rules and clustering technique. Comparison is done between the performance of the recommender system when association rule technique is used and the performance when association rule and clustering is combined. The experiments with the implemented system proved that accuracy can be improved when association rules and clustering is combined. An accuracy improvement of 36% was achieved by using the combination technique over the association rule technique.",
"title": ""
},
{
"docid": "04e7a143443a04be37e61a8ce0f562d6",
"text": "During the 2016 United States presidential election, politicians have increasingly used Twitter to express their beliefs, stances on current political issues, and reactions concerning national and international events. Given the limited length of tweets and the scrutiny politicians face for what they choose or neglect to say, they must craft and time their tweets carefully. The content and delivery of these tweets is therefore highly indicative of a politician’s stances. We present a weakly supervised method for extracting how issues are framed and temporal activity patterns on Twitter for popular politicians and issues of the 2016 election. These behavioral components are combined into a global model which collectively infers the most likely stance and agreement patterns among politicians, with respective accuracies of 86.44% and 84.6% on average.",
"title": ""
}
] |
scidocsrr
|
a60f54a4b2103ce0e5fa92ef52973b0f
|
A Comparative Study of Classification and Regression Algorithms for Modelling Students' Academic Performance.
|
[
{
"docid": "fb3cb4a5aef2633add88f28a7f3f19ac",
"text": "Both the root mean square error (RMSE) and the mean absolute error (MAE) are regularly employed in model evaluation studies. Willmott and Matsuura(2005) have suggested that the RMSE is not a good indicator of average model performance and might be a misleading indicator of average error, and thus the MAE would be a better metric for that purpose. While some concerns over using RMSE raised by Willmott and Matsuura(2005) andWillmott et al. (2009) are valid, the proposed avoidance of RMSE in favor of MAE is not the solution. Citing the aforementioned papers, many researchers chose MAE over RMSE to present their model evaluation statistics when presenting or adding the RMSE measures could be more beneficial. In this technical note, we demonstrate that the RMSE is not ambiguous in its meaning, contrary to what was claimed by Willmott et al. (2009). The RMSE is more appropriate to represent model performance than the MAE when the error distribution is expected to be Gaussian. In addition, we show that the RMSE satisfies the triangle inequality requirement for a distance metric, whereasWillmott et al. (2009) indicated that the sums-ofsquares-based statistics do not satisfy this rule. In the end, we discussed some circumstances where using the RMSE will be more beneficial. However, we do not contend that the RMSE is superior over the MAE. Instead, a combination of metrics, including but certainly not limited to RMSEs and MAEs, are often required to assess model performance.",
"title": ""
}
] |
[
{
"docid": "9c47d1896892c663987caa24d4a70037",
"text": "Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed.",
"title": ""
},
{
"docid": "ed6a69d040a53bec208cf3f0fc5076e9",
"text": "The Buddhist construct of mindfulness is a central element of mindfulness-based interventions and derives from a systematic phenomenological programme developed over several millennia to investigate subjective experience. Enthusiasm for ‘mindfulness’ in Western psychological and other science has resulted in proliferation of definitions, operationalizations and self-report inventories that purport tomeasure mindful awareness as a trait. This paper addresses a number of seemingly intractable issues regarding current attempts to characterize mindfulness and also highlights a number of vulnerabilities in this domain that may lead to denaturing, distortion, dilution or reification of Buddhist constructs related to mindfulness. Enriching positivist Western psychological paradigms with a detailed and complex Buddhist phenomenology of the mind may require greater study and long-term direct practice of insight meditation than is currently common among psychologists and other scientists. Pursuit of such an approach would seem a necessary precondition for attempts to characterize and quantify mindfulness.",
"title": ""
},
{
"docid": "7645c6a0089ab537cb3f0f82743ce452",
"text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.",
"title": ""
},
{
"docid": "0add09adcb099c977435ddd8390c03c8",
"text": "A novel diode-triggered SCR (DTSCR) ESD protection element is introduced for low-voltage application (signal, supply voltage /spl les/1.8 V) and extremely narrow ESD design margins. Trigger voltage engineering in conjunction with fast and efficient SCR voltage clamping is applied for the protection of ultra-sensitive circuit nodes, such as SiGe HBT bases (e.g. f/sub Tmax/=45 GHz in BiCMOS-0.35 /spl mu/m LNA input) and thin gate-oxides (e.g. tox=1.7 nm in CMOS-0.09 /spl mu/m input). SCR integration is possible based on CMOS devices or can alternatively be formed by high-speed SiGe HBTs.",
"title": ""
},
{
"docid": "34d0b8d4b1c25b4be30ad0c15435f407",
"text": "Cranioplasty using alternate alloplastic bone substitutes instead of autologous bone grafting is inevitable in the clinical field. The authors present their experiences with cranial reshaping using methyl methacrylate (MMA) and describe technical tips that are keys to a successful procedure. A retrospective chart review of patients who underwent cranioplasty with MMA between April 2007 and July 2010 was performed. For 20 patients, MMA was used for cranioplasty after craniofacial trauma (n = 16), tumor resection (n = 2), and a vascular procedure (n = 2). The patients were divided into two groups. In group 1, MMA was used in full-thickness inlay fashion (n = 3), and in group 2, MMA was applied in partial-thickness onlay fashion (n = 17). The locations of reconstruction included the frontotemporal region (n = 5), the frontoparietotemporal region (n = 5), the frontal region (n = 9), and the vertex region (n = 1). The size of cranioplasty varied from 30 to 144 cm2. The amount of MMA used ranged from 20 to 70 g. This biomaterial was applied without difficulty, and no intraoperative complications were linked to the applied material. The patients were followed for 6 months to 4 years (mean, 2 years) after MMA implantation. None of the patients showed any evidence of implant infection, exposure, or extrusion. Moreover, the construct appeared to be structurally stable over time in all the patients. Methyl methacrylate is a useful adjunct for treating deficiencies of the cranial skeleton. It provides rapid and reliable correction of bony defects and contour deformities. Although MMA is alloplastic, appropriate surgical procedures can avoid problems such as infection and extrusion. An acceptable overlying soft tissue envelope should be maintained together with minimal contamination of the operative site. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "41149c3504f43bd76cca054a4dff384c",
"text": "This paper presents a 3-dimensional millimeterwave statistical channel impulse response model from 28 GHz and 73 GHz ultrawideband propagation measurements [1], [2] . An accurate 3GPP-like channel model that supports arbitrary carrier frequency, RF bandwidth, and antenna beamwidth (for both omnidirectional and arbitrary directional antennas), is provided. Time cluster and spatial lobe model parameters are extracted from empirical distributions from field measurements. A step-by-step modeling procedure for generati ng channel coefficients is shown to agree with statistics from t he field measurements, thus confirming that the statistical cha nnel model faithfully recreates spatial and temporal channel impulse responses for use in millimeter-wave 5G air interface desig ns.",
"title": ""
},
{
"docid": "4a5959a7bcfaa0c7768d9a0d742742be",
"text": "In this paper, we are interested in understanding the interrelationships between mainstream and social media in forming public opinion during mass crises, specifically in regards to how events are framed in the mainstream news and on social networks and to how the language used in those frames may allow to infer political slant and partisanship. We study the lingual choices for political agenda setting in mainstream and social media by analyzing a dataset of more than 40M tweets and more than 4M news articles from the mass protests in Ukraine during 2013-2014 — known as \"Euromaidan\" — and the post-Euromaidan conflict between Russian, pro-Russian and Ukrainian forces in eastern Ukraine and Crimea. We design a natural language processing algorithm to analyze at scale the linguistic markers which point to a particular political leaning in online media and show that political slant in news articles and Twitter posts can be inferred with a high level of accuracy. These findings allow us to better understand the dynamics of partisan opinion formation during mass crises and the interplay between mainstream and social media in such circumstances.",
"title": ""
},
{
"docid": "61cfd09f87ed6bacd3446ea32061bc4c",
"text": "Subgroup discovery is a data mining technique which extracts interesting rules with respect to a target variable. An important characteristic of this task is the combination of predictive and descriptive induction. An overview related to the task of subgroup discovery is presented. This review focuses on the foundations, algorithms, and advanced studies together with the applications of subgroup discovery presented throughout the specialised bibliography.",
"title": ""
},
{
"docid": "646a1a07019d0f2965051baebcfe62c5",
"text": "We present a computing model based on the DNA strand displacement technique, which performs Bayesian inference. The model will take single-stranded DNA as input data, that represents the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease affecting a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose ratio represents the application of Bayes’ law: the conditional probability of the disease given the signal. The models presented in this paper can have the potential to enable the application of probabilistic reasoning in genetic diagnosis in vitro.",
"title": ""
},
{
"docid": "fd2abd6749eb7a85f3480ae9b4cbefa6",
"text": "We examine the current performance and future demands of interconnects to and on silicon chips. We compare electrical and optical interconnects and project the requirements for optoelectronic and optical devices if optics is to solve the major problems of interconnects for future high-performance silicon chips. Optics has potential benefits in interconnect density, energy, and timing. The necessity of low interconnect energy imposes low limits especially on the energy of the optical output devices, with a ~ 10 fJ/bit device energy target emerging. Some optical modulators and radical laser approaches may meet this requirement. Low (e.g., a few femtofarads or less) photodetector capacitance is important. Very compact wavelength splitters are essential for connecting the information to fibers. Dense waveguides are necessary on-chip or on boards for guided wave optical approaches, especially if very high clock rates or dense wavelength-division multiplexing (WDM) is to be avoided. Free-space optics potentially can handle the necessary bandwidths even without fast clocks or WDM. With such technology, however, optics may enable the continued scaling of interconnect capacity required by future chips.",
"title": ""
},
{
"docid": "c101290e355e76df7581a4500c111c86",
"text": "The Internet of Things (IoT) is a network of physical things, objects, or devices, such as radio-frequency identification tags, sensors, actuators, mobile phones, and laptops. The IoT enables objects to be sensed and controlled remotely across existing network infrastructure, including the Internet, thereby creating opportunities for more direct integration of the physical world into the cyber world. The IoT becomes an instance of cyberphysical systems (CPSs) with the incorporation of sensors and actuators in IoT devices. Objects in the IoT have the potential to be grouped into geographical or logical clusters. Various IoT clusters generate huge amounts of data from diverse locations, which creates the need to process these data more efficiently. Efficient processing of these data can involve a combination of different computation models, such as in situ processing and offloading to surrogate devices and cloud-data centers.",
"title": ""
},
{
"docid": "de6581719d2bc451695a77d43b091326",
"text": "Keyphrases are useful for a variety of tasks in information retrieval systems and natural language processing, such as text summarization, automatic indexing, clustering/classification, ontology learning and building and conceptualizing particular knowledge domains, etc. However, assigning these keyphrases manually is time consuming and expensive in term of human resources. Therefore, there is a need to automate the task of extracting keyphrases. A wide range of techniques of keyphrase extraction have been proposed, but they are still suffering from the low accuracy rate and poor performance. This paper presents a state of the art of automatic keyphrase extraction approaches to identify their strengths and weaknesses. We also discuss why some techniques perform better than others and how can we improve the task of automatic keyphrase extraction.",
"title": ""
},
{
"docid": "03aa771b457ec08c6ee5a4d1bb2d20dc",
"text": "CONTEXT\nThe use of unidimensional pain scales such as the Numerical Rating Scale (NRS), Verbal Rating Scale (VRS), or Visual Analogue Scale (VAS) is recommended for assessment of pain intensity (PI). A literature review of studies specifically comparing the NRS, VRS, and/or VAS for unidimensional self-report of PI was performed as part of the work of the European Palliative Care Research Collaborative on pain assessment.\n\n\nOBJECTIVES\nTo investigate the use and performance of unidimensional pain scales, with specific emphasis on the NRSs.\n\n\nMETHODS\nA systematic search was performed, including citations through April 2010. All abstracts were evaluated by two persons according to specified criteria.\n\n\nRESULTS\nFifty-four of 239 papers were included. Postoperative PI was most frequently studied; six studies were in cancer. Eight versions of the NRS (NRS-6 to NRS-101) were used in 37 studies; a total of 41 NRSs were tested. Twenty-four different descriptors (15 for the NRSs) were used to anchor the extremes. When compared with the VAS and VRS, NRSs had better compliance in 15 of 19 studies reporting this, and were the recommended tool in 11 studies on the basis of higher compliance rates, better responsiveness and ease of use, and good applicability relative to VAS/VRS. Twenty-nine studies gave no preference. Many studies showed wide distributions of NRS scores within each category of the VRSs. Overall, NRS and VAS scores corresponded, with a few exceptions of systematically higher VAS scores.\n\n\nCONCLUSION\nNRSs are applicable for unidimensional assessment of PI in most settings. Whether the variability in anchors and response options directly influences the numerical scores needs to be empirically tested. This will aid in the work toward a consensus-based, standardized measure.",
"title": ""
},
{
"docid": "44e310ba974f371605f6b6b6cd0146aa",
"text": "This section is a collection of shorter “Issue and Opinions” pieces that address some of the critical challenges around the evolution of digital business strategy. These voices and visions are from thought leaders who, in addition to their scholarship, have a keen sense of practice. They outline through their opinion pieces a series of issues that will need attention from both research and practice. These issues have been identified through their observation of practice with the eye of a scholar. They provide fertile opportunities for scholars in information systems, strategic management, and organizational theory.",
"title": ""
},
{
"docid": "e2f69fd023cfe69432459e8a82d4c79a",
"text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. This paper first presents a recursive programming technique which reduces an order of magnitude for computing the MCET objective function. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "37653b46f34b1418ad7dbfc59cbfe16a",
"text": "The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction.",
"title": ""
},
{
"docid": "9bcf47b56ba4b58533b0d0435411a7b3",
"text": "OBJECTIVES\nThe aim of this report was to evaluate the 5-year clinical performance and survival of zirconia (NobelProcera™) single crowns.\n\n\nMETHODS\nAll patients treated with porcelain-veneered zirconia single crowns in a private practice during the period October 2004 to November 2005 were included. The records were scrutinized for clinical data. Information was available for 162 patients and 205 crowns.\n\n\nRESULTS\nMost crowns (78%) were placed on premolars and molars. Out of the 143 crowns that were followed for 5 years, 126 (88%) did not have any complications. Of those with complications, the most common were: extraction of abutment tooth (7; 3%), loss of retention (15; 7%), need of endodontic treatment (9; 4%) and porcelain veneer fracture (6; 3%). No zirconia cores fractured. In total 19 restorations (9%) were recorded as failures: abutment tooth extraction (7), remake of crown due to lost retention (6), veneer fracture (4), persistent pain (1) and caries (1). The 5-year cumulative survival rate (CSR) was 88.8%.\n\n\nCONCLUSIONS\nAccording to the present 5-year results zirconia crowns (NobelProcera™) are a promising prosthodontic alternative also in the premolar and molar regions. Out of the 143 crowns followed for 5 years, 126 (88%) did not have any complications. However, 9% of the restorations were judged as failures. Further studies are necessary to evaluate the long-term success.",
"title": ""
},
{
"docid": "1c56b68a20b2baba45c7939a24d9be70",
"text": "Emotion recognition in conversations is crucial for building empathetic machines. Current work in this domain do not explicitly consider the inter-personal influences that thrive in the emotional dynamics of dialogues. To this end, we propose Interactive COnversational memory Network (ICON), a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the selfand interspeaker emotional influences into global memories. Such memories generate contextual summaries which aid in predicting the emotional orientation of utterance-videos. Our model outperforms state-of-the-art networks on multiple classification and regression tasks in two benchmark datasets.",
"title": ""
},
{
"docid": "c41e65416f0339046587239ae6a6f7b4",
"text": "Substantial research has documented the universality of several emotional expressions. However, recent findings have demonstrated cultural differences in level of recognition and ratings of intensity. When testing cultural differences, stimulus sets must meet certain requirements. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE) is the only set that meets these requirements. The purpose of this study was to obtain judgment reliability data on the JACFEE, and to test for possible cross-national differences in judgments as well. Subjects from Hungary, Japan, Poland, Sumatra, United States, and Vietnam viewed the complete JACFEE photo set and judged which emotions were portrayed in the photos and rated the intensity of those expressions. Results revealed high agreement across countries in identifying the emotions portrayed in the photos, demonstrating the reliability of the JACFEE. Despite high agreement, cross-national differences were found in the exact level of agreement for photos of anger, contempt, disgust, fear, sadness, and surprise. Cross-national differences were also found in the level of intensity attributed to the photos. No systematic variation due to either preceding emotion or presentation order of the JACFEE was found. Also, we found that grouping the countries into a Western/Non-Western dichotomy was not justified according to the data. Instead, the cross-national differences are discussed in terms of possible sociopsychological variables that influence emotion judgments. Cross-cultural research has documented high agreement in judgments of facial expressions of emotion in over 30 different cultures (Ekman, The research reported in this article was made supported in part by faculty awards for research and scholarship to David Matsumoto. Also, we would like to express our appreciation to William Irwin for his previous work on this project, and to Nathan Yrizarry, Hideko Uchida, Cenita Kupperbusch, Galin Luk, Carinda Wilson-Cohn, Sherry Loewinger, and Sachiko Takeuchi for their general assistance in our research program. Correspondence concerning this article should be addressed to David Matsumoto, Department of Psychology, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132. Electronic mail may be sent to dm@sfsu.edu. loumal of Nonverbal Behavior 21(1), Spring 1997 © 1997 Human Sciences Press, Inc. 3 1994), including preliterate cultures (Ekman, Sorensen, & Friesen, 1969; Ekman & Friesen, 1971). Recent research, however, has reported cultural differences in judgment as well. Matsumoto (1989, 1992a), for example, found that American and Japanese subjects differed in their rates of recognition. Differences have also been found in ratings of intensity (Ekman et al., 1987). Examining cultural differences requires a different methodology than studying similarities. Matsumoto (1992a) outlined such requirements: (1) cultures must view the same expressions; (2) the facial expressions must meet criteria for validly and reliably portraying the universal emotions; (3) each poser must appear only once; (4) expressions must include posers of more than one race. Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) was designed to meet these requirements. JACFEE was developed by photographing over one hundred posers who voluntarily moved muscles that correspond to the universal expressions (Ekman & Friesen, 1975, 1986). 
From the thousands of photographs taken, a small pool of photos was coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). A final pool of photos was then selected to ensure that each poser only contributed one photo in the final set, which is comprised of 56 photos, including eight photos each of anger, contempt, disgust, fear, happiness, sadness, and surprise. Four photos of each emotion depict posers of either Japanese or Caucasian descent (2 males, 2 females). Two published studies have reported judgment data on the JACFEE, but only with American and Japanese subjects. Matsumoto and Ekman (1989), for example, asked their subjects to make scalar ratings (0-8) on seven emotion dimensions for each photo. The judgments of the Americans and Japanese were similar in relation to strongest emotion depicted in the photos, and the relative intensity among the photographs. Americans, however, gave higher absolute intensity ratings on photos of happiness, anger, sadness, and surprise. In the second study (Matsumoto, 1992a), high agreement was found in the recognition judgments, but the level of recognition differed for anger, disgust, fear, and sadness. While data from these and other studies seem to indicate the dual existence of universal and culture-specific aspects of emotion judgment, the methodology used in many previous studies has recently been questioned on several grounds, including the previewing of slides, judgment context, presentation order, preselection of slides, the use of posed expressions, and type of response format (Russell, 1994; see Ekman, 1994, and Izard, 1994, for reply). Two of these, judgment context and presentation order, are especially germane to the present study and are addressed here. JOURNAL OF NONVERBAL BEHAVIOR 4",
"title": ""
}
] |
scidocsrr
|
9dfa53d70e1d72fc77c4ea19877698b6
|
Identifying Argumentative Discourse Structures in Persuasive Essays
|
[
{
"docid": "3fa5de33e7ccd6c440a4a65a5681f8b8",
"text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.",
"title": ""
},
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "b69686c780d585d6b53fe7ec37e22b80",
"text": "In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and/or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.",
"title": ""
}
] |
[
{
"docid": "8689b038c62d96adf1536594fcc95c07",
"text": "We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.",
"title": ""
},
{
"docid": "2da528d39b8815bcbb9a8aaf20d94926",
"text": "Collaborative filtering (CF) is out of question the most widely adopted and successful recommendation approach. A typical CF-based recommender system associates a user with a group of like-minded users based on their individual preferences over all the items, either explicit or implicit, and then recommends to the user some unobserved items enjoyed by the group. However, we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more reasonable to predict preferences through one user's correlated subgroups, but not the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate a new Multiclass Co-Clustering (MCoC) model, which captures relations of user-to-item, user-to-user, and item-to-item simultaneously. Then, we combine traditional CF algorithms with subgroups for improving their top- <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"cai-ieq1-2566622.gif\"/></alternatives></inline-formula> recommendation performance. Our approach can be seen as a new extension of traditional clustering CF models. Systematic experiments on several real data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "5bd61380b9b05b3e89d776c6cbeb0336",
"text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "baa71f083831919a067322ab4b268db5",
"text": "– The theoretical analysis gives an overview of the functioning of DDS, especially with respect to noise and spurs. Different spur reduction techniques are studied in detail. Four ICs, which were the circuit implementations of the DDS, were designed. One programmable logic device implementation of the CORDIC based quadrature amplitude modulation (QAM) modulator was designed with a separate D/A converter IC. For the realization of these designs some new building blocks, e.g. a new tunable error feedback structure and a novel and more cost-effective digital power ramp generator, were developed. Implementing a DDS on an FPGA using Xilinx’s ISE software. IndexTerms—CORDIC, DDS, NCO, FPGA, SFDR. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "e2d2fe124fbef2138d2c67a02da220c6",
"text": "This paper addresses robust fault diagnosis of the chaser’s thrusters used for the rendezvous phase of the Mars Sample Return (MSR) mission. The MSR mission is a future exploration mission undertaken jointly by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The goal is to return tangible samples from Mars atmosphere and ground to Earth for analysis. A residual-based scheme is proposed that is robust against the presence of unknown time-varying delays induced by the thruster modulator unit. The proposed fault diagnosis design is based on Eigenstructure Assignment (EA) and first-order Padé approximation. The resulted method is able to detect quickly any kind of thruster faults and to isolate them using a cross-correlation based test. Simulation results from the MSR ”high-fidelity” industrial simulator, provided by Thales Alenia Space, demonstrate that the proposed method is able to detect and isolate some thruster faults in a reasonable time, despite of delays in the thruster modulator unit, inaccurate navigation unit, and spatial disturbances (i.e. J2 gravitational perturbation, atmospheric drag, and solar radiation pressure). Robert Fonod IMS laboratory, University of Bordeaux 1, 351 cours de la libération, 33405 Talence, France e-mail: robert.fonod@ims-bordeaux.fr David Henry IMS laboratory, University of Bordeaux 1, 351 cours de la libération, 33405 Talence, France e-mail: david.henry@ims-bordeaux.fr Catherine Charbonnel Thales Alenia Space, 100 Boulevard du Midi, 06156 Cannes La Bocca, France e-mail: catherine.charbonnel@thalesaleniaspace.com Eric Bornschlegl European Space Research and Technology Centre, Keplerlaan 1, 2200 AG Noordwijk, Netherlands e-mail: eric.bornschlegl@esa.int 1 Proceedings of the EuroGNC 2013, 2nd CEAS Specialist Conference on Guidance, Navigation & Control, Delft University of Technology, Delft, The Netherlands, April 10-12, 2013 FrBT2.2",
"title": ""
},
{
"docid": "edb5b733e77271dd4e1afaf742388a68",
"text": "The Intolerance of Uncertainty Model was initially developed as an explanation for worry within the context of generalized anxiety disorder. However, recent research has identified intolerance of uncertainty (IU) as a possible transdiagnostic maintaining factor across the anxiety disorders and depression. The aim of this study was to determine whether IU mediated the relationship between neuroticism and symptoms related to various anxiety disorders and depression in a treatment-seeking sample (N=328). Consistent with previous research, IU was significantly associated with neuroticism as well as with symptoms of social phobia, panic disorder and agoraphobia, obsessive-compulsive disorder, generalized anxiety disorder, and depression. Moreover, IU explained unique variance in these symptom measures when controlling for neuroticism. Mediational analyses showed that IU was a significant partial mediator between neuroticism and all symptom measures, even when controlling for symptoms of other disorders. More specifically, anxiety in anticipation of future uncertainty (prospective anxiety) partially mediated the relationship between neuroticism and symptoms of generalized anxiety disorder (i.e. worry) and obsessive-compulsive disorder, whereas inaction in the face of uncertainty (inhibitory anxiety) partially mediated the relationship between neuroticism and symptoms of social anxiety, panic disorder and agoraphobia, and depression. Sobel's test demonstrated that all hypothesized meditational pathways were associated with significant indirect effects, although the mediation effect was stronger for worry than other symptoms. Potential implications of these findings for the treatment of anxiety disorders and depression are discussed.",
"title": ""
},
{
"docid": "947ffeb4fff1ca4ee826d71d4add399e",
"text": "Description bttroductian. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branchand-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here. Description o f the algorithm--Version 1. Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set trot will soon be made clear. The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets Just described. It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:",
"title": ""
},
{
"docid": "5d154a62b22415cbedd165002853315b",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "5bc1c336b8e495e44649365f11af4ab8",
"text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.",
"title": ""
},
{
"docid": "cdee51ab9562e56aee3fff58cd2143ba",
"text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.",
"title": ""
},
{
"docid": "bd0375c1a6393117d9b3e97340e90316",
"text": "INTRODUCTION\nCancer patients are particularly vulnerable to depression and anxiety, with fatigue as the most prevalent symptom of those undergoing treatment. The purpose of this study was to determine whether improvement in depression, anxiety or fatigue during chemotherapy following anthroposophy art therapy intervention is substantial enough to warrant a controlled trial.\n\n\nMATERIAL AND METHODS\nSixty cancer patients on chemotherapy and willing to participate in once-weekly art therapy sessions (painting with water-based paints) were accrued for the study. Nineteen patients who participated in > or =4 sessions were evaluated as the intervention group, and 41 patients who participated in < or =2 sessions comprised the participant group. Hospital Anxiety and Depression Scale (HADS) and the Brief Fatigue Inventory (BFI) were completed before every session, relating to the previous week.\n\n\nRESULTS\nBFI scores were higher in the participant group (p=0.06). In the intervention group, the median HADS score for depression was 9 at the beginning and 7 after the fourth appointment (p=0.021). The median BFI score changed from 5.7 to 4.1 (p=0.24). The anxiety score was in the normal range from the beginning.\n\n\nCONCLUSION\nAnthroposophical art therapy is worthy of further study in the treatment of cancer patients with depression or fatigue during chemotherapy treatment.",
"title": ""
},
{
"docid": "b1f98cbb045f8c15f53d284c9fa9d881",
"text": "If the pace of increase in life expectancy in developed countries over the past two centuries continues through the 21st century, most babies born since 2000 in France, Germany, Italy, the UK, the USA, Canada, Japan, and other countries with long life expectancies will celebrate their 100th birthdays. Although trends differ between countries, populations of nearly all such countries are ageing as a result of low fertility, low immigration, and long lives. A key question is: are increases in life expectancy accompanied by a concurrent postponement of functional limitations and disability? The answer is still open, but research suggests that ageing processes are modifiable and that people are living longer without severe disability. This finding, together with technological and medical development and redistribution of work, will be important for our chances to meet the challenges of ageing populations.",
"title": ""
},
{
"docid": "9f16e90dc9b166682ac9e2a8b54e611a",
"text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.",
"title": ""
},
{
"docid": "9948786041464ea72bfdddeaba0d2707",
"text": "The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs, and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as \"difficult\" than for \"easy\" or \"moderate\" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.",
"title": ""
},
{
"docid": "af910640384bca46ba4268fe4ba0c3b3",
"text": "The experience and methodology developed by COPEL for the integrated use of Pls-Cadd (structure spotting) and Tower (structural analysis) softwares are presented. Structural evaluations in transmission line design are possible for any loading condition, allowing considerations of new or updated loading trees, wind speeds or design criteria.",
"title": ""
},
{
"docid": "df9c6dc1d6d1df15b78b7db02f055f70",
"text": "The robotic grasp detection is a great challenge in the area of robotics. Previous work mainly employs the visual approaches to solve this problem. In this paper, a hybrid deep architecture combining the visual and tactile sensing for robotic grasp detection is proposed. We have demonstrated that the visual sensing and tactile sensing are complementary to each other and important for the robotic grasping. A new THU grasp dataset has also been collected which contains the visual, tactile and grasp configuration information. The experiments conducted on a public grasp dataset and our collected dataset show that the performance of the proposed model is superior to state of the art methods. The results also indicate that the tactile data could help to enable the network to learn better visual features for the robotic grasp detection task.",
"title": ""
},
{
"docid": "27e1d29dc8d252081e80f93186a14660",
"text": "Over the last several years there has been an increasing focus on early detection of Autism Spectrum Disorder (ASD), not only from the scientific field but also from professional associations and public health systems all across Europe. Not surprisingly, in order to offer better services and quality of life for both children with ASD and their families, different screening procedures and tools have been developed for early assessment and intervention. However, current evidence is needed for healthcare providers and policy makers to be able to implement specific measures and increase autism awareness in European communities. The general aim of this review is to address the latest and most relevant issues related to early detection and treatments. The specific objectives are (1) analyse the impact, describing advantages and drawbacks, of screening procedures based on standardized tests, surveillance programmes, or other observational measures; and (2) provide a European framework of early intervention programmes and practices and what has been learnt from implementing them in public or private settings. This analysis is then discussed and best practices are suggested to help professionals, health systems and policy makers to improve their local procedures or to develop new proposals for early detection and intervention programmes.",
"title": ""
},
{
"docid": "40e129b6264892f1090fd9a8d6a9c1ae",
"text": "We introduce an algorithm for text detection and localization (\"spotting\") that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82% and 83%, respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.",
"title": ""
},
{
"docid": "93d4d58e974e66c11c9b41d12a833da0",
"text": "OBJECTIVE\nButyrate enemas may be effective in the treatment of active distal ulcerative colitis. Because colonic fermentation of Plantago ovata seeds (dietary fiber) yields butyrate, the aim of this study was to assess the efficacy and safety of Plantago ovata seeds as compared with mesalamine in maintaining remission in ulcerative colitis.\n\n\nMETHODS\nAn open label, parallel-group, multicenter, randomized clinical trial was conducted. A total of 105 patients with ulcerative colitis who were in remission were randomized into groups to receive oral treatment with Plantago ovata seeds (10 g b.i.d.), mesalamine (500 mg t.i.d.), and Plantago ovata seeds plus mesalamine at the same doses. The primary efficacy outcome was maintenance of remission for 12 months.\n\n\nRESULTS\nOf the 105 patients, 102 were included in the final analysis. After 12 months, treatment failure rate was 40% (14 of 35 patients) in the Plantago ovata seed group, 35% (13 of 37) in the mesalamine group, and 30% (nine of 30) in the Plantago ovata plus mesalamine group. Probability of continued remission was similar (Mantel-Cox test, p = 0.67; intent-to-treat analysis). Therapy effects remained unchanged after adjusting for potential confounding variables with a Cox's proportional hazards survival analysis. Three patients were withdrawn because of the development of adverse events consisting of constipation and/or flatulence (Plantago ovata seed group = 1 and Plantago ovata seed plus mesalamine group = 2). A significant increase in fecal butyrate levels (p = 0.018) was observed after Plantago ovata seed administration.\n\n\nCONCLUSIONS\nPlantago ovata seeds (dietary fiber) might be as effective as mesalamine to maintain remission in ulcerative colitis.",
"title": ""
}
] |
scidocsrr
|
2a9d99e81c06a751cb76e5d22677eca8
|
CloudMAC — An OpenFlow based architecture for 802.11 MAC layer processing in the cloud
|
[
{
"docid": "83355e7d2db67e42ec86f81909cfe8c1",
"text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.",
"title": ""
}
] |
[
{
"docid": "1ab9bfcb356b394a3e9441a75668bc07",
"text": "User Generated Content (UGC) is a rapidly emerging growth engine of many Internet businesses and an important component of the new knowledge society. However, little research has been done on the mechanisms inherent to UGC. This research explores the relationships among the quality, value, and benefits of UGC. The main objective is to identify and evaluate the quality factors that affect UGC value, which ultimately influences the utility of UGC. We identify the three quality dimensions of UGC: content, design, and technology. We classify UGC value into three categories: functional value, emotional value, and social value. We attempt to characterize the mechanism underlying UGC value by evaluating the relationships between the quality and value of UGC and investigating what types of UGC value affect UGC utility. Our results show that all three factors of UGC quality are strongly associated with increases in the functional, emotional, and social values of UGC. Our findings also demonstrate that the functional and emotional values of UGC are critically important factors for UGC utility. Based on these findings, we discuss theoretical implications for future research and practical implications for UGC services.",
"title": ""
},
{
"docid": "858acbd02250ff2f8325786475b4f3f3",
"text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. (See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. So objections to Grice based on a requirement of psychological reality will fail.",
"title": ""
},
{
"docid": "d0a765968e7cc4cf8099f66e0c3267da",
"text": "We explore the lattice sphere packing representation of a multi-antenna system and the algebraic space-time (ST) codes. We apply the sphere decoding (SD) algorithm to the resulted lattice code. For the uncoded system, SD yields, with small increase in complexity, a huge improvement over the well-known V-BLAST detection algorithm. SD of algebraic ST codes exploits the full diversity of the coded multi-antenna system, and makes the proposed scheme very appealing to take advantage of the richness of the multi-antenna environment. The fact that the SD does not depend on the constellation size, gives rise to systems with very high spectral efficiency, maximum-likelihood performance, and low decoding complexity.",
"title": ""
},
{
"docid": "e0aac76af8e600afba35a97d88a60da1",
"text": "We present a new algorithm for merging occupancy grid maps produced by multiple robots exploring the same environment. The algorithm produces a set of possible transformations needed to merge two maps, i.e translations and rotations. Each transformation is weighted, thus allowing to distinguish uncertain situations, and enabling to track multiple cases when ambiguities arise. Transformations are produced extracting some spectral information from the maps. The approach is deterministic, non-iterative, and fast. The algorithm has been tested on public available datasets, as well as on maps produced by two robots concurrently exploring both indoor and outdoor environments. Throughout the experimental validation stage the technique we propose consistently merged maps exhibiting very different characteristics.",
"title": ""
},
{
"docid": "3503074668bd55868f86a99a8a171073",
"text": "Deep Neural Networks (DNNs) provide state-of-the-art solutions in several difficult machine perceptual tasks. However, their performance relies on the availability of a large set of labeled training data, which limits the breadth of their applicability. Hence, there is a need for new semisupervised learning methods for DNNs that can leverage both (a small amount of) labeled and unlabeled training data. In this paper, we develop a general loss function enabling DNNs of any topology to be trained in a semi-supervised manner without extra hyper-parameters. As opposed to current semi-supervised techniques based on topology-specific or unstable approaches, ours is both robust and general. We demonstrate that our approach reaches state-of-the-art performance on the SVHN (9.82% test error, with 500 labels and wide Resnet) and CIFAR10 (16.38% test error, with 8000 labels and sigmoid convolutional neural network) data sets.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "e63eac157bd750ca39370fd5b9fdf85e",
"text": "Allometric scaling relations, including the 3/4 power law for metabolic rates, are characteristic of all organisms and are here derived from a general model that describes how essential materials are transported through space-filling fractal networks of branching tubes. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of scaling relations for mammalian circulatory systems that are in agreement with data. More generally, the model predicts structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.",
"title": ""
},
{
"docid": "52b1adf3b7b6bf08651c140d726143c3",
"text": "The antifungal potential of aqueous leaf and fruit extracts of Capsicum frutescens against four major fungal strains associated with groundnut storage was evaluated. These seed-borne fungi, namely Aspergillus flavus, A. niger, Penicillium sp. and Rhizopus sp. were isolated by standard agar plate method and identified by macroscopic and microscopic features. The minimum inhibitory concentrations (MIC) and minimum fungicidal concentration (MFC) of C. frutescens extracts were determined. MIC values of the fruit extract were lower compared to the leaf extract. At MIC, leaf extract showed strong activity against A. flavus (88.06%), while fruit extract against A. niger (88.33%) in the well diffusion method. Groundnut seeds treated with C.frutescens fruit extract (10mg/ml) showed a higher rate of fungal inhibition. The present results suggest that groundnuts treated with C. frutescens fruit extracts are capable of preventing fungal infection to a certain extent.",
"title": ""
},
{
"docid": "0d0f9576ba5ccc442f531d4222bb1a12",
"text": "This tutorial introduces fingerprint recognition systems and their main components: sensing, feature extraction and matching. The basic technologies are surveyed and some state-of-the-art algorithms are discussed. Due to the extent of this topic it is not possible to provide here all the details and to cover a number of interesting issues such as classification, indexing and multimodal systems. Interested readers can find in [21] a complete and comprehensive guide to fingerprint recognition.",
"title": ""
},
{
"docid": "fa52d586e7e6c92444845881ab1990cf",
"text": "This paper proposes a novel rotor contour design for variable reluctance (VR) resolvers by injecting auxiliary air-gap permeance harmonics. Based on the resolver model with nonoverlapping tooth-coil windings, the influence of air-gap length function is first investigated by finite element (FE) method, and the detection accuracy of designs with higher values of fundamental wave factor may deteriorate due to the increasing third order of output voltage harmonics. Further, the origins of the third harmonics are investigated by analytical derivation and FE analyses of output voltages. Furthermore, it is proved that the voltage harmonics and the detection accuracy are significantly improved by injecting auxiliary air-gap permeance harmonics in the design of rotor contour. In addition, the proposed design can also be employed to eliminate voltage tooth harmonics in a conventional VR resolver topology. Finally, VR resolver prototypes with the conventional and the proposed rotors are fabricated and tested respectively to verify the analyses.",
"title": ""
},
{
"docid": "ef09bc08cc8e94275e652e818a0af97f",
"text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.",
"title": ""
},
{
"docid": "d079bba6c4490bf00eb73541ebba8ace",
"text": "The literature on Design Science (or Design Research) has been mixed on the inclusion, form, and role of theory and theorising in Design Science. Some authors have explicitly excluded theory development and testing from Design Science, leaving them to the Natural and Social/Behavioural Sciences. Others propose including theory development and testing as part of Design Science. Others propose some ideas for the content of IS Design Theories, although more detailed and clear concepts would be helpful. This paper discusses the need and role for theory in Design Science. It further proposes some ideas for standards for the form and level of detail needed for theories in Design Science. Finally it develops a framework of activities for the interaction of Design Science with research in other scientific paradigms.",
"title": ""
},
{
"docid": "039055a2fa9292031abc8db50819eb35",
"text": "Boosting is a technique of combining a set weak classifiers to form one high-performance prediction rule. Boosting was successfully applied to solve the problems of object detection, text analysis, data mining and etc. The most and widely used boosting algorithm is AdaBoost and its later more effective variations Gentle and Real AdaBoost. In this article we propose a new boosting algorithm, which produces less generalization error compared to mentioned algorithms at the cost of somewhat higher training error.",
"title": ""
},
{
"docid": "e4375a896bb4fb9d437ee68e3c2bf2c1",
"text": "Executive Overview This paper describes the findings from a new, and intrinsically interdisciplinary, literature on happiness and well-being. The paper focuses on international evidence. We report the patterns in modern data, discuss what has been persuasively established and what has not, and suggest paths for future research. Looking ahead, our instinct is that this social science research avenue will gradually merge with a related literature—from the medical, epidemiological, and biological sciences—on biomarkers and health. Nevertheless, we expect that intellectual convergence to happen slowly.",
"title": ""
},
{
"docid": "cfbd49b3d76942631639d00d7ee736d6",
"text": "The online implementation of traditional business mechanisms raises many new issues not considered in classical economic models. This partially explains why online auctions have become the most successful but also the most controversial Internet businesses in the recent years. One emerging issue is that the lack of authentication over the Internet has encouraged shill bidding, the deliberate placing of bids on the seller’s behalf to artificially drive up the price of the seller’s auctioned item. Private-value English auctions with shill bidding can result in a higher expected seller profit than other auction formats [1], violating the classical revenue equivalence theory. This paper analyzes shill bidding in multi-round online English auctions and proves that there is no equilibrium without shill bidding. Taking into account the seller’s shills and relistings, bidders with valuations even higher than the reserve will either wait for the next round or shield their bids in the current round. Hence, it is inevitable to redesign online auctions to deal with the “shiller’s curse.”",
"title": ""
},
{
"docid": "28370dc894584f053a5bb029142ad587",
"text": "Pharmaceutical parallel trade in the European Union is a large and growing phenomenon, and hope has been expressed that it has the potential to reduce prices paid by health insurance and consumers and substantially to raise overall welfare. In this paper we examine the phenomenon empirically, using data on prices and volumes of individual imported products. We have found that the gains from parallel trade accrue mostly to the distribution chain rather than to health insurance and consumers. This is because in destination countries parallel traded drugs are priced just below originally sourced drugs. We also test to see whether parallel trade has a competition impact on prices in destination countries and find that it does not. Such competition effects as there are in pharmaceuticals come mainly from the presence of generics. Accordingly, instead of a convergence to the bottom in EU pharmaceutical prices, the evidence points at ‘convergence to the top’. This is explained by the fact that drug prices are subjected to regulation in individual countries, and by the limited incentives of purchasers to respond to price differentials.",
"title": ""
},
{
"docid": "78c6ec58cec2607d5111ee415d683525",
"text": "Forty-three normal hearing participants were tested in two experiments, which focused on temporal coincidence in auditory visual (AV) speech perception. In these experiments, audio recordings of/pa/and/ba/were dubbed onto video recordings of /ba/or/ga/, respectively (ApVk, AbVg), to produce the illusory \"fusion\" percepts /ta/, or /da/ [McGurk, H., & McDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-747]. In Experiment 1, an identification task using McGurk pairs with asynchronies ranging from -467 ms (auditory lead) to +467 ms was conducted. Fusion responses were prevalent over temporal asynchronies from -30 ms to +170 ms and more robust for audio lags. In Experiment 2, simultaneity judgments for incongruent and congruent audiovisual tokens (AdVd, AtVt) were collected. McGurk pairs were more readily judged as asynchronous than congruent pairs. Characteristics of the temporal window over which simultaneity and fusion responses were maximal were quite similar, suggesting the existence of a 200 ms duration asymmetric bimodal temporal integration window.",
"title": ""
},
{
"docid": "1a7dd0fb317a9640ee6e90036d6036fa",
"text": "A genome-wide association study was performed to identify genetic factors involved in susceptibility to psoriasis (PS) and psoriatic arthritis (PSA), inflammatory diseases of the skin and joints in humans. 223 PS cases (including 91 with PSA) were genotyped with 311,398 single nucleotide polymorphisms (SNPs), and results were compared with those from 519 Northern European controls. Replications were performed with an independent cohort of 577 PS cases and 737 controls from the U.S., and 576 PSA patients and 480 controls from the U.K.. Strongest associations were with the class I region of the major histocompatibility complex (MHC). The most highly associated SNP was rs10484554, which lies 34.7 kb upstream from HLA-C (P = 7.8x10(-11), GWA scan; P = 1.8x10(-30), replication; P = 1.8x10(-39), combined; U.K. PSA: P = 6.9x10(-11)). However, rs2395029 encoding the G2V polymorphism within the class I gene HCP5 (combined P = 2.13x10(-26) in U.S. cases) yielded the highest ORs with both PS and PSA (4.1 and 3.2 respectively). This variant is associated with low viral set point following HIV infection and its effect is independent of rs10484554. We replicated the previously reported association with interleukin 23 receptor and interleukin 12B (IL12B) polymorphisms in PS and PSA cohorts (IL23R: rs11209026, U.S. PS, P = 1.4x10(-4); U.K. PSA: P = 8.0x10(-4); IL12B:rs6887695, U.S. PS, P = 5x10(-5) and U.K. PSA, P = 1.3x10(-3)) and detected an independent association in the IL23R region with a SNP 4 kb upstream from IL12RB2 (P = 0.001). Novel associations replicated in the U.S. PS cohort included the region harboring lipoma HMGIC fusion partner (LHFP) and conserved oligomeric golgi complex component 6 (COG6) genes on chromosome 13q13 (combined P = 2x10(-6) for rs7993214; OR = 0.71), the late cornified envelope gene cluster (LCE) from the Epidermal Differentiation Complex (PSORS4) (combined P = 6.2x10(-5) for rs6701216; OR 1.45) and a region of LD at 15q21 (combined P = 2.9x10(-5) for rs3803369; OR = 1.43). This region is of interest because it harbors ubiquitin-specific protease-8 whose processed pseudogene lies upstream from HLA-C. This region of 15q21 also harbors the gene for SPPL2A (signal peptide peptidase like 2a) which activates tumor necrosis factor alpha by cleavage, triggering the expression of IL12 in human dendritic cells. We also identified a novel PSA (and potentially PS) locus on chromosome 4q27. This region harbors the interleukin 2 (IL2) and interleukin 21 (IL21) genes and was recently shown to be associated with four autoimmune diseases (Celiac disease, Type 1 diabetes, Grave's disease and Rheumatoid Arthritis).",
"title": ""
},
{
"docid": "9d803b0ce1f1af621466b1d7f97b7edf",
"text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.",
"title": ""
}
] |
scidocsrr
|
badd6e36d6833cb2ccd3e2bf595608c7
|
Understanding User Revisions When Using Information Systems Features: Adaptive System Use and Triggers
|
[
{
"docid": "586d89b6d45fd49f489f7fb40c87eb3a",
"text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.",
"title": ""
}
] |
[
{
"docid": "d310779b1006f90719a0ece3cf2583b2",
"text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.",
"title": ""
},
{
"docid": "7dcba854d1f138ab157a1b24176c2245",
"text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.",
"title": ""
},
{
"docid": "56d295950edf9503d89d891f7c1b361f",
"text": "This paper describes the discipline of distance metric learning, a branch of machine learning that aims to learn distances from the data. Distance metric learning can be useful to improve similarity learning algorithms, and also has applications in dimensionality reduction. We describe the distance metric learning problem and analyze its main mathematical foundations. We discuss some of the most popular distance metric learning techniques used in classification, showing their goals and the required information to understand and use them. Furthermore, we present a Python package that collects a set of 17 distance metric learning techniques explained in this paper, with some experiments to evaluate the performance of the different algorithms. Finally, we discuss several possibilities of future work in this topic.",
"title": ""
},
{
"docid": "d6dadf93c1a51be67f67a7fb8fdb9b68",
"text": "Recent advances in quantum computing seem to suggest it is only a matter of time before general quantum computers become a reality. Because all widely used cryptographic constructions rely on the hardness of problems that can be solved efficiently using known quantum algorithms, quantum computers will have a profound impact on the field of cryptography. One such construction that will be broken by quantum computers is elliptic curve cryptography, which is used in blockchain applications such as bitcoin for digital signatures. Hash-based signature schemes are a promising post-quantum secure alternative, but existing schemes such as XMSS and SPHINCS are impractical for blockchain applications because of their performance characteristics. We construct a quantum secure signature scheme for use in blockchain technology by combining a hash-based one-time signature scheme with Naor-Yung chaining. By exploiting the structure and properties of a blockchain we achieve smaller signatures and better performance than existing hash-based signature schemes. The proposed scheme supports both one-time and many-time key pairs, and is designed to be easily adopted into existing blockchain implementations.",
"title": ""
},
{
"docid": "5656c77061a3f678172ea01e226ede26",
"text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. Not only is obesity increasing, but no national success stories have been reported in the past 33 years. Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.",
"title": ""
},
{
"docid": "c8ef89eb90824b3d0f966c6f9b097d0b",
"text": "Machine Learning and Inference methods have become ubiquitous in our attempt to induce more abstract representations of natural language text, visual scenes, and other messy, naturally occurring data, and support decisions that depend on it. However, learning models for these tasks is difficult partly because generating the necessary supervision signals for it is costly and does not scale. This paper describes several learning paradigms that are designed to alleviate the supervision bottleneck. It will illustrate their benefit in the context of multiple problems, all pertaining to inducing various levels of semantic representations from text. In particular, we discuss (i) Response Driven Learning of models, a learning protocol that supports inducing meaning representations simply by observing the model’s behavior in its environment, (ii) the exploitation of Incidental Supervision signals that exist in the data, independently of the task at hand, to learn models that identify and classify semantic predicates, and (iii) the use of weak supervision to combine simple models to support global decisions where joint supervision is not available. While these ideas are applicable in a range of Machine Learning driven fields, we will demonstrate it in the context of several natural language applications, from (cross-lingual) text classification, to Wikification, to semantic parsing.",
"title": ""
},
{
"docid": "d3797817bcde1b16d35cc7efbc97953c",
"text": "Biological time-keeping mechanisms have fascinated researchers since the movement of leaves with a daily rhythm was first described >270 years ago. The circadian clock confers a approximately 24-hour rhythm on a range of processes including leaf movements and the expression of some genes. Molecular mechanisms and components underlying clock function have been described in recent years for several animal and prokaryotic organisms, and those of plants are beginning to be characterized. The emerging model of the Arabidopsis clock has mechanistic parallels with the clocks of other model organisms, which consist of positive and negative feedback loops, but the molecular components appear to be unique to plants.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "f92a71e6094000ecf47ebd02bf4e5c4a",
"text": "Exploding amounts of multimedia data increasingly require automatic indexing and classification, e.g. training classifiers to produce high-level features, or semantic concepts, chosen to represent image content, like car, person, etc. When changing the applied domain (i.e. from news domain to consumer home videos), the classifiers trained in one domain often perform poorly in the other domain due to changes in feature distributions. Additionally, classifiers trained on the new domain alone may suffer from too few positive training samples. Appropriately adapting data/models from an old domain to help classify data in a new domain is an important issue. In this work, we develop a new cross-domain SVM (CDSVM) algorithm for adapting previously learned support vectors from one domain to help classification in another domain. Better precision is obtained with almost no additional computational cost. Also, we give a comprehensive summary and comparative study of the state- of-the-art SVM-based cross-domain learning methods. Evaluation over the latest large-scale TRECVID benchmark data set shows that our CDSVM method can improve mean average precision over 36 concepts by 7.5%. For further performance gain, we also propose an intuitive selection criterion to determine which cross-domain learning method to use for each concept.",
"title": ""
},
{
"docid": "ad6d21a36cc5500e4d8449525eae25ca",
"text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.",
"title": ""
},
{
"docid": "ad49388ef64fd63e0f318a0097019fe2",
"text": "We present an experimental study of IEEE 802.11n (high throughput extension to the 802.11 standard) using commodity wireless hardware. 802.11n introduces a variety of new mechanisms including physical layer diversity techniques, channel bonding and frame aggregation mechanisms. Using measurements from our testbed, we analyze the fundamental characteristics of 802.11n links and quantify the gains of each mechanism under diverse scenarios. We show that the throughput of an 802.11n link can be severely degraded (up ≈85%) in presence of an 802.11g link. Our results also indicate that increased amount of interference due to wider channel bandwidths can lead to throughput degradation. To this end, we characterize the nature of interference due to variable channel widths in 802.11n and show that careful modeling of interference is imperative in such scenarios. Further, as a reappraisal of previous work, we evaluate the effectiveness of MAC level diversity in the presence of physical layer diversity mechanisms introduced by 802.11n.",
"title": ""
},
{
"docid": "2fc024a732681aea0945430894351394",
"text": "Despite the increasing popularity of cloud services, ensuring the security and availability of data, resources and services remains an ongoing research challenge. Distributed denial of service (DDoS) attacks are not a new threat, but remain a major security challenge and are a topic of ongoing research interest. Mitigating DDoS attack in cloud presents a new dimension to solutions proffered in traditional computing due to its architecture and features. This paper reviews 96 publications on DDoS attack and defense approaches in cloud computing published between January 2009 and December 2015, and discusses existing research trends. A taxonomy and a conceptual cloud DDoS mitigation framework based on change point detection are presented. Future research directions are also outlined.",
"title": ""
},
{
"docid": "728ea68ac1a50ae2d1b280b40c480aec",
"text": "This paper presents a new metaprogramming library, CL ARRAY, that offers multiplatform and generic multidimensional data containers for C++ specifically adapted for parallel programming. The CL ARRAY containers are built around a new formalism for representing the multidimensional nature of data as well as the semantics of multidimensional pointers and contiguous data structures. We also present OCL ARRAY VIEW, a concept based on metaprogrammed enveloped objects that supports multidimensional transformations and multidimensional iterators designed to simplify and formalize the interfacing process between OpenCL APIs, standard template library (STL) algorithms and CL ARRAY containers. Our results demonstrate improved performance and energy savings over the three most popular container libraries available to the developer community for use in the context of multi-linear algebraic applications.",
"title": ""
},
{
"docid": "48a476d5100f2783455fabb6aa566eba",
"text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].",
"title": ""
},
{
"docid": "16b9d7602e45da0bb47017d1516c95bb",
"text": "Intranet is a term used to describe the use of Internet technologies internally within an organization rather than externally to connect to the global Internet. While the advancement and the sophistication of the intranet is progressing tremendously, research on intranet utilization is still very scant. This paper is an attempt to provide a conceptual understanding of the intranet utilization and the corresponding antecedents and impacts through the proposed conceptual model. Based on several research frameworks built through past research, the authors attempt to propose a framework for studying intranet utilization that is based on three constructs i.e. mode of utilizations, decision support and knowledge sharing. Three groups of antecedent variables namely intranet, organizational and individual characteristics are explored to determine their possible contribution to intranet utilization. In addition, the impacts of intranet utilization are also examined in terms of task productivity, task innovation and individual sense of accomplishments. Based on the proposed model, several propositions are formulated as a basis for the study that will follow.",
"title": ""
},
{
"docid": "cff32690c2421b2ad94dea33f5e4479d",
"text": "Heavy ion single-event effect (SEE) measurements on Xilinx Zynq-7000 are reported. Heavy ion susceptibility to Single-Event latchup (SEL), single event upsets (SEUs) of BRAM, configuration bits of FPGA and on chip memory (OCM) of the processor were investigated.",
"title": ""
},
{
"docid": "418ebc0424128ec1a89d5e5292872124",
"text": "Apocyni Veneti Folium (AVF) is a kind of staple traditional Chinese medicine with vast clinical consumption because of its positive effects. However, due to the habitats and adulterants, its quality is uneven. To control the quality of this medicinal herb, in this study, the quality of AVF was evaluated based on simultaneous determination of multiple bioactive constituents combined with multivariate statistical analysis. A reliable method based on ultra-fast liquid chromatography tandem triple quadrupole mass spectrometry (UFLC-QTRAP-MS/MS) was developed for the simultaneous determination of a total of 43 constituents, including 15 flavonoids, 6 organic acids, 13 amino acids, and 9 nucleosides in 41 Luobumaye samples from different habitats and commercial herbs. Furthermore, according to the contents of these 43 constituents, principal component analysis (PCA) was employed to classify and distinguish between AVF and its adulterants, leaves of Poacynum hendersonii (PHF), and gray relational analysis (GRA) was performed to evaluate the quality of the samples. The proposed method was successfully applied to the comprehensive quality evaluation of AVF, and all results demonstrated that the quality of AVF was higher than the PHF. This study will provide comprehensive information necessary for the quality control of AVF.",
"title": ""
},
{
"docid": "46980b89e76bc39bf125f63ed9781628",
"text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.",
"title": ""
},
{
"docid": "25eea5205d1f8beaa8c4a857da5714bc",
"text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.",
"title": ""
},
{
"docid": "81f474cbd140935d93faf47af87a205b",
"text": "The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.",
"title": ""
}
] |
scidocsrr
|
015f01d0b690b329424e2e757777c8ce
|
Understanding customer satisfaction and loyalty: An empirical study of mobile instant messages in China
|
[
{
"docid": "97957590d7bec130bac3cf0f0e29cf9a",
"text": "Understanding user acceptance of the Internet, especially the intentions to use Internet commerce and mobile commerce, is important in explaining the fact that these commerce have been growing at an exponential rate in recent years. This paper studies factors of new technology to better understand and manage the electronic commerce activities. The theoretical model proposed in this paper is intended to clarify the factors as they are related to the technology acceptance model. More specifically, the relationship among trust and other factors are hypothesized. Using the technology acceptance model, this research reveals the importance of the hedonic factor. The result of this research implies that the ways of stimulating and facilitating customers' participation in mobile commerce should be differentiated from those in Internet commerce",
"title": ""
},
{
"docid": "b6d6da15fd000be1a01d4b0f1bb0d087",
"text": "Purpose – The purpose of the paper is to distinguish features of m-commerce from those of e-commerce and identify factors to influence customer satisfaction (m-satisfaction) and loyalty (m-loyalty) in m-commerce by empirically-based case study. Design/methodology/approach – First, based on previous literature, the paper builds sets of customer satisfaction factors for both e-commerce and m-commerce. Second, features of m-commerce are identified by comparing it with current e-commerce through decision tree (DT). Third, with the derived factors from DT, significant factors and relationships among the factors, m-satisfaction and m-loyalty are examined by m-satisfaction model employing structural equation model. Findings – The paper finds that m-commerce is partially similar in factors like “transaction process” and “customization” which lead customer satisfaction after connecting an m-commerce site, but it has unique aspects of “content reliability”, “availability”, and “perceived price level of mobile Internet (m-Internet)” which build customer’s intention to the m-commerce site. Through the m-satisfaction model, “content reliability”, and “transaction process” are proven to be significantly influential factors to m-satisfaction and m-loyalty. Research implications/limitations – The paper can be a meaningful step to provide empirical analysis and evaluation based on questionnaire survey targeting actual users. The research is based on a case study on digital music transaction, which is indicative, rather than general. Practical implications – The paper meets the needs to focus on customer under the fiercer competition in Korean m-commerce market. It can guide those who want to initiate, move or broaden their business to m-commerce from e-commerce. Originality/value – The paper develops a revised ACSI model to identify individual critical factors and the degree of effect.",
"title": ""
}
] |
[
{
"docid": "716e08a31e775342daee6319d4c6a4cf",
"text": "Error-related EEG potentials (ErrP) can be used for brain-machine interfacing (BMI). Decoding of these signals, indicating subject's perception of erroneous system decisions or actions can be used to correct these actions or to improve the overall interfacing system. Multiple studies have shown the feasibility of decoding these potentials in single-trial using different types of experimental protocols and feedback modalities. However, previously reported approaches are limited by the use of long inter-stimulus intervals (ISI > 2 s). In this work we assess if it is possible to overcome this limitation. Our results show that it is possible to decode error-related potentials elicited by stimuli presented with ISIs lower than 1 s without decrease in performance. Furthermore, the increase in the presentation rate did not increase the subject workload. This suggests that the presentation rate for ErrP-based BMI protocols using serial monitoring paradigms can be substantially increased with respect to previous works.",
"title": ""
},
{
"docid": "bc0064e87f077b9acf4d583d3d90489b",
"text": "The dominant evolutionary theory of physical attraction posits that attractiveness reflects physiological health, and attraction is a mechanism for identifying a healthy mate. Previous studies have found that perceptions of the healthiest body mass index (weight scaled for height; BMI) for women are close to healthy BMI guidelines, while the most attractive BMI is significantly lower, possibly pointing to an influence of sociocultural factors in determining attractive BMI. However, less is known about ideal body size for men. Further, research has not addressed the role of body fat and muscle, which have distinct relationships with health and are conflated in BMI, in determining perceived health and attractiveness. Here, we hypothesised that, if attractiveness reflects physiological health, the most attractive and healthy appearing body composition should be in line with physiologically healthy body composition. Thirty female and 33 male observers were instructed to manipulate 15 female and 15 male body images in terms of their fat and muscle to optimise perceived health and, separately, attractiveness. Observers were unaware that they were manipulating the muscle and fat content of bodies. The most attractive apparent fat mass for female bodies was significantly lower than the healthiest appearing fat mass (and was lower than the physiologically healthy range), with no significant difference for muscle mass. The optimal fat and muscle mass for men's bodies was in line with the healthy range. Male observers preferred a significantly lower overall male body mass than did female observers. While the body fat and muscle associated with healthy and attractive appearance is broadly in line with physiologically healthy values, deviations from this pattern suggest that future research should examine a possible role for internalization of body ideals in influencing perceptions of attractive body composition, particularly in women.",
"title": ""
},
{
"docid": "69a0426796f46ac387f1f9d831c85e87",
"text": "In this paper, a Volterra analysis built on top of a normal harmonic balance simulation is used for a comprehensive analysis of the causes of AM-PM distortion in a LDMOS RF power amplifier (PA). The analysis shows that any nonlinear capacitors cause AM-PM. In addition, varying terminal impedances may pull the matching impedances and cause phase shift. The AM-PM is also affected by the distortion that is mixed down from the second harmonic. As a sample circuit, an internally matched 30-W LDMOS RF PA is used and the results are compared to measured AM-AM, AM-PM and large-signal S11.",
"title": ""
},
{
"docid": "eb10f86262180b122d261f5acbe4ce18",
"text": "Procrasttnatton ts variously descnbed a? harmful, tnnocuous, or even beneficial Two longitudinal studies examined procrastination among students Procrasttnators reported lower stress and less illness than nonprocrasttnators early in the semester, but they reported higher stress and more illness late in the term, and overall they were sicker Procrastinators also received lower grades on atl assignment's Procrasttnatton thus appears to be a self-defeating behavior pattem marked by short-term benefits and long-term costs Doing one's work and fulfilling other obligations in a timely fashion seem like integral parts of rational, proper adult funcuoning Yet a majonty of the population admits to procrastinating at least sometimes, and substantial minonties admit to significant personal, occupational, or financial difficulties resulting from their dilatory behavior (Ferran, Johnson, & McCown, 1995) Procrastinauon is often condemned, particularly by people who do not think themselves guilty of it (Burka & Yuen, 1983, Ferran et dl, 1995) Cntics of procrastination depict it as a lazy self-indulgent habit of putting things off for no reason They say it is self-defeating m that It lowers the quality of performance, because one ends up with less time to work (Baumeister & Scher, 1988, Ellis & Knaus, 1977) Others depict it as a destructive strategy of self-handicappmg (Jones & Berglas, 1978), such a,s when people postpone or withhold effort so as to give themselves an excuse for anticipated poor performance (Tice, 1991, Tice & Baumeister, 1990) People who finish their tasks and assignments early may point self-nghteously to the stress suffered by procrastinators at the last minute and say that putting things off is bad for one's physical or mental health (see Boice, 1989, 1996, Rothblum, Solomon, & Murakami, 1986 Solomon & Rothblum, 1984) On the other hand, some procrastinators defend their practice They point out correctly that if one puts in the same amount of work on the project, it does not matter whether this is done early or late Some even say that procrastination improves perfonnance, because the imminent deadline creates excitement and pressure that elicit peak performance \"I do my best work under pressure,\" in the standard phrase (Ferran, 1992, Ferran et al , 1995, Uy, 1995) Even if it were true that stress and illness are higher for people who leave things unul the last minute—and research has not yet provided clear evidence that in fact they both are higher—this might be offset by the enjoyment of carefree times earlier (see Ainslie, 1992) The present investigation involved a longitudinal study of the effects of procrastination on quality of performance, stress, and illness Early in the semester, students were given an assignment with a deadline Procrastinators were identified usmg Lay's (1986) scale Students' well-being was assessed with self-reports of stress and illAddress correspondence Case Western Reserve Unive 7123, e-mail dxt2@po cwiu o Dianne M Tice Department of Psychology, sity 10900 Euclid Ave Cleveland OH 44106ness The validity of the scale was checked by ascertaining whethtr students tumed in the assignment early, on time, or late Finally, task performance was assessed by consulting the grades received Competing predictions could be made",
"title": ""
},
{
"docid": "591af257561f98f28b1530c0fee13907",
"text": "Most of the mining techniques have only concerned with interesting patterns. However, in the recent years, there is an increasing demand in mining Unexpected Items or Outliers or Rare Items. Several application domains have realized the direct mapping between outliers in data and real world anomalies that are of great interest to an analyst. Outliers represents semantically correct but infrequent situationin a database. Detecting outliers allows extracting useful and actionable knowledge to the domain experts. In Educational Data, outliers are those students who have secured scores deviated so much from the average scores of other students. The educational data are Quantitative in nature. Any mining technique on quantitative data will partition the quantitative attributes with unnatural boundaries which lead to overestimate or underestimate the boundary values. Fuzzy logic handles this in a more realistic way. Knowing the threshold values apriori is not possible, hence our method uses dynamically calculated Support and Rank measures rather than predefined values. Our method uses a modified Fuzzy Apriori Rare Item sets Mining (FARIM) algorithm to detect the outliers (weak student). This will help the teachers in giving extra coaching for the weak students.",
"title": ""
},
{
"docid": "b5b4e637065ba7c0c18a821bef375aea",
"text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.",
"title": ""
},
{
"docid": "690659887c8261e2984802e2cdb71b5f",
"text": "The Discrete Hodge Helmholtz Decomposition (DHHD) is able to locate critical points in a vector field. We explore two novel applications of this technique to image processing problems, viz., hurricane tracking and fingerprint analysis. The eye of the hurricane represents a rotational center, which is shown to be robustly detected using DHHD. This is followed by an automatic segmentation and tracking of the hurricane eye, which does not require manual initializations. DHHD is also used for identification of reference points in fingerprints. The new technique for reference point detection is relatively insensitive to noise in the orientation field. The DHHD based method is shown to detect reference points correctly for 96.25% of the images in the database used.",
"title": ""
},
{
"docid": "f4c2a00b8a602203c86eaebc6f111f46",
"text": "Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today.",
"title": ""
},
{
"docid": "abcc4de8a7ca3b716fa0951429a6c969",
"text": "Recently, deep learning has been successfully applied to the problem of hashing, yielding remarkable performance compared to traditional methods with hand-crafted features. However, most of existing deep hashing methods are designed for the supervised scenario and require a large number of labeled data. In this paper, we propose a novel semi-supervised hashing method for image retrieval, named Deep Hashing with a Bipartite Graph (BGDH), to simultaneously learn embeddings, features and hash codes. More specifically, we construct a bipartite graph to discover the underlying structure of data, based on which an embedding is generated for each instance. Then, we feed raw pixels as well as embeddings to a deep neural network, and concatenate the resulting features to determine the hash code. Compared to existing methods, BGDH is a universal framework that is able to utilize various types of graphs and losses. Furthermore, we propose an inductive variant of BGDH to support out-of-sample extensions. Experimental results on real datasets show that our BGDH outperforms state-of-the-art hashing methods.",
"title": ""
},
{
"docid": "4a5abe07b93938e7549df068967731fc",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "31954ceaa223884fa27a9c446288b8a9",
"text": "Computational thinking (CT) has been described as the use of abstraction, automation, and analysis in problem-solving [3]. We examine how these ways of thinking take shape for middle and high school youth in a set of NSF-supported programs. We discuss opportunities and challenges in both in-school and after-school contexts. Based on these observations, we present a \"use-modify-create\" framework, representing three phases of students' cognitive and practical activity in computational thinking. We recommend continued investment in the development of CT-rich learning environments, in educators who can facilitate their use, and in research on the broader value of computational thinking.",
"title": ""
},
{
"docid": "b59e332c086a8ce6d6ddc0526b8848c7",
"text": "We propose Generative Adversarial Tree Search (GATS), a sample-efficient Deep Reinforcement Learning (DRL) algorithm. While Monte Carlo Tree Search (MCTS) is known to be effective for search and planning in RL, it is often sampleinefficient and therefore expensive to apply in practice. In this work, we develop a Generative Adversarial Network (GAN) architecture to model an environment’s dynamics and a predictor model for the reward function. We exploit collected data from interaction with the environment to learn these models, which we then use for model-based planning. During planning, we deploy a finite depth MCTS, using the learned model for tree search and a learned Q-value for the leaves, to find the best action. We theoretically show that GATS improves the bias-variance tradeoff in value-based DRL. Moreover, we show that the generative model learns the model dynamics using orders of magnitude fewer samples than the Q-learner. In non-stationary settings where the environment model changes, we find the generative model adapts significantly faster than the Q-learner to the new environment.",
"title": ""
},
{
"docid": "e733b08455a5ca2a5afa596268789993",
"text": "In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.",
"title": ""
},
{
"docid": "5ed74b235edcbcb5aeb5b6b3680e2122",
"text": "Self-paced learning (SPL) mimics the cognitive mechanism o f humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain better weighting strategy that is determined by mini zer function. Existing methods usually pursue this by artificially designing th e explicit form of SPL regularizer. In this paper, we focus on the minimizer functi on, and study a group of new regularizer, named self-paced implicit regularizer th at is deduced from robust loss function. Based on the convex conjugacy theory, the min imizer function for self-paced implicit regularizer can be directly learned fr om the latent loss function, while the analytic form of the regularizer can be even known. A general framework (named SPL-IR) for SPL is developed accordingly. We dem onstrate that the learning procedure of SPL-IR is associated with latent robu st loss functions, thus can provide some theoretical inspirations for its working m echanism. We further analyze the relation between SPL-IR and half-quadratic opt imization. Finally, we implement SPL-IR to both supervised and unsupervised tasks , nd experimental results corroborate our ideas and demonstrate the correctn ess and effectiveness of implicit regularizers.",
"title": ""
},
{
"docid": "7d42d3d197a4d62e1b4c0f3c08be14a9",
"text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.",
"title": ""
},
{
"docid": "a1c859b44c46ebf4d2d413f4303cb4f7",
"text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.",
"title": ""
},
{
"docid": "4fb6b884b22962c6884bd94f8b76f6f2",
"text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.",
"title": ""
},
{
"docid": "5c772b272bbbd8a19af1f2960a44be18",
"text": "The American Association of Clinical Endocrinologists and American Association of Endocrine Surgeons Medical Guidelines for the Management of Adrenal Incidentalomas are systematically developed statements to assist health care providers in medical decision making for specific clinical conditions. Most of the content herein is based on literature reviews. In areas of uncertainty, professional judgment was applied. These guidelines are a working document that reflects the state of the field at the time of publication. Because rapid changes in this area are expected, periodic revisions are inevitable. We encourage medical professionals to use this information in conjunction with their best clinical judgment. The presented recommendations may not be appropriate in all situations. Any decision by practitioners to apply these guidelines must be made in light of local resources and individual circumstances.",
"title": ""
},
{
"docid": "cc78d1482412669e05f57e13cbc1c59f",
"text": "We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction.",
"title": ""
},
{
"docid": "fb8638c46ca5bb4a46b1556a2504416d",
"text": "In this paper we investigate how a VANET-based traffic information system can overcome the two key problems of strictly limited bandwidth and minimal initial deployment. First, we present a domain specific aggregation scheme in order to minimize the required overall bandwidth. Then we propose a genetic algorithm which is able to identify good positions for static roadside units in order to cope with the highly partitioned nature of a VANET in an early deployment stage. A tailored toolchain allows to optimize the placement with respect to an application-centric objective function, based on travel time savings. By means of simulation we assess the performance of the resulting traffic information system and the optimization strategy.",
"title": ""
}
] |
scidocsrr
|
d5e15ac864231fcbcd8823b9ed7b70b2
|
Design and Dynamic Model of a Frog-inspired Swimming Robot Powered by Pneumatic Muscles
|
[
{
"docid": "30f48021bca12899d6f2e012e93ba12d",
"text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.",
"title": ""
}
] |
[
{
"docid": "50ffd544ab676a0b3c17802734a9fd9a",
"text": "PSDVec is a Python/Perl toolbox that learns word embeddings, i.e. the mapping of words in a natural language to continuous vectors which encode the semantic/syntactic regularities between the words. PSDVec implements a word embedding learning method based on a weighted low-rank positive semidefinite approximation. To scale up the learning process, we implement a blockwise online learning algorithm to learn the embeddings incrementally. This strategy greatly reduces the learning time of word embeddings on a large vocabulary, and can learn the embeddings of new words without re-learning the whole vocabulary. On 9 word similarity/analogy benchmark sets and 2 Natural Language Processing (NLP) tasks, PSDVec produces embeddings that has the best average performance among popular word embedding tools. PSDVec provides a new option for NLP practitioners. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1613f8b73465d52a3e850c894578ef2a",
"text": "In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.",
"title": ""
},
{
"docid": "51215220471f8f7f4afd68c1a27b5809",
"text": "he unauthorized modification and subsequent misuse of software is often referred to as software cracking. Usually, cracking requires disabling one or more software features that enforce policies (of access, usage, dissemination, etc.) related to the software. Because there is value and/or notoriety to be gained by accessing valuable software capabilities, cracking continues to be common and is a growing problem. To combat cracking, anti-tamper (AT) technologies have been developed to protect valuable software. Both hardware and software AT technologies aim to make software more resistant against attack and protect critical program elements. However, before discussing the various AT technologies, we need to know the adversary's goals. What do software crackers hope to achieve? Their purposes vary, and typically include one or more of the following: • Gaining unauthorized access. The attacker's goal is to disable the software access control mechanisms built into the software. After doing so, the attacker can make and distribute illegal copies whose copy protection or usage control mechanisms have been disabled – this is the familiar software piracy problem. If the cracked software provides access to classified data, then the attacker's real goal is not the software itself, but the data that is accessible through the software. The attacker sometimes aims at modifying or unlocking specific functionality in the program, e.g., a demo or export version of software is often a deliberately degraded version of what is otherwise fully functional software. The attacker then seeks to make it fully functional by re-enabling the missing features. • Reverse engineering. The attacker aims to understand enough about the software to steal key routines, to gain access to proprietary intellectual property , or to carry out code-lifting, which consists of reusing a crucial part of the code (without necessarily understanding the internals of how it works) in some other software. Good programming practices, while they facilitate software engineering, also tend to simultaneously make it easier to carry out reverse engineering attacks. These attacks are potentially very costly to the original software developer as they allow a competitor (or an enemy) to nullify the develop-er's competitive advantage by rapidly closing a technology gap through insights gleaned from examining the software. • Violating code integrity. This familiar attack consists of either injecting malicious code (malware) into a program , injecting code that is not malevolent but illegally enhances a pro-gram's functionality, or otherwise sub-verting a program so it performs new and …",
"title": ""
},
{
"docid": "f69ba8c401cd61057888dfa023bfee30",
"text": "Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.",
"title": ""
},
{
"docid": "cf2e23cddb72b02d1cca83b4c3bf17a8",
"text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiencyand flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were unD ow nl oa de d fr om in fo rm s. or g by [ 12 8. 32 .7 5. 11 8] o n 28 A pr il 20 14 , a t 1 0: 21 . Fo r pe rs on al u se o nl y, a ll ri gh ts r es er ve d. PAUL S. ADLER, BARBARA GOLDOFTAS AND DAVID I. LEVINE Flexibility Versus Efficiency? 44 ORGANIZATION SCIENCE/Vol. 10, No. 1, January–February 1999 der Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. 
The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). 
Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes. On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr",
"title": ""
},
{
"docid": "7c9aba06418b51a90f1f3d97c3e3f83a",
"text": "BACKGROUND\nResearch indicates that music therapy can improve social behaviors and joint attention in children with Autism Spectrum Disorder (ASD); however, more research on the use of music therapy interventions for social skills is needed to determine the impact of group music therapy.\n\n\nOBJECTIVE\nTo examine the effects of a music therapy group intervention on eye gaze, joint attention, and communication in children with ASD.\n\n\nMETHOD\nSeventeen children, ages 6 to 9, with a diagnosis of ASD were randomly assigned to the music therapy group (MTG) or the no-music social skills group (SSG). Children participated in ten 50-minute group sessions over a period of 5 weeks. All group sessions were designed to target social skills. The Social Responsiveness Scale (SRS), the Autism Treatment Evaluation Checklist (ATEC), and video analysis of sessions were used to evaluate changes in social behavior.\n\n\nRESULTS\nThere were significant between-group differences for joint attention with peers and eye gaze towards persons, with participants in the MTG demonstrating greater gains. There were no significant between-group differences for initiation of communication, response to communication, or social withdraw/behaviors. There was a significant interaction between time and group for SRS scores, with improvements for the MTG but not the SSG. Scores on the ATEC did not differ over time between the MTG and SSG.\n\n\nCONCLUSIONS\nThe results of this study support further research on the use of music therapy group interventions for social skills in children with ASD. Statistical results demonstrate initial support for the use of music therapy social groups to develop joint attention.",
"title": ""
},
{
"docid": "53ab46387cb1c04e193d2452c03a95ad",
"text": "Real time control of five-axis machine tools requires smooth generation of feed, acceleration and jerk in CNC systems without violating the physical limits of the drives. This paper presents a feed scheduling algorithm for CNC systems to minimize the machining time for five-axis contour machining of sculptured surfaces. The variation of the feed along the five-axis tool-path is expressed in a cubic B-spline form. The velocity, acceleration and jerk limits of the five axes are considered in finding the most optimal feed along the toolpath in order to ensure smooth and linear operation of the servo drives with minimal tracking error. The time optimal feed motion is obtained by iteratively modulating the feed control points of the B-spline to maximize the feed along the tool-path without violating the programmed feed and the drives’ physical limits. Long tool-paths are handled efficiently by applying a moving window technique. The improvement in the productivity and linear operation of the five drives is demonstrated with five-axis simulations and experiments on a CNC machine tool. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c841938f03a07fffc5150fbe18f8f740",
"text": "Ensemble modeling is now a well-established means for improving prediction accuracy; it enables you to average out noise from diverse models and thereby enhance the generalizable signal. Basic stacked ensemble techniques combine predictions from multiple machine learning algorithms and use these predictions as inputs to second-level learning models. This paper shows how you can generate a diverse set of models by various methods such as forest, gradient boosted decision trees, factorization machines, and logistic regression and then combine them with stacked-ensemble techniques such as hill climbing, gradient boosting, and nonnegative least squares in SAS Visual Data Mining and Machine Learning. The application of these techniques to real-world big data problems demonstrates how using stacked ensembles produces greater prediction accuracy and robustness than do individual models. The approach is powerful and compelling enough to alter your initial data mining mindset from finding the single best model to finding a collection of really good complementary models. It does involve additional cost due both to training a large number of models and the proper use of cross validation to avoid overfitting. This paper shows how to efficiently handle this computational expense in a modern SAS environment and how to manage an ensemble workflow by using parallel computation in a distributed framework.",
"title": ""
},
{
"docid": "2a4cb6dac01c4388b4b8d8a80e30fc2b",
"text": "Chemotaxis toward amino-acids results from the suppression of directional changes which occur spontaneously in isotropic solutions.",
"title": ""
},
{
"docid": "4482146da978a89920e128470e3b8567",
"text": "Glaucoma is the second leading cause of blindness. Glaucoma can be diagnosed through measurement of neuro-retinal optic cup-to-disc ratio (CDR). Automatic calculation of optic cup boundary is challenging due to the interweavement of blood vessels with the surrounding tissues around the cup. A Convex Hull based Neuro-Retinal Optic Cup Ellipse Optimization algorithm improves the accuracy of the boundary estimation. The algorithm’s effectiveness is demonstrated on 70 clinical patient’s data set collected from Singapore Eye Research Institute. The root mean squared error of the new algorithm is 43% better than the ARGALI system which is the state-of-the-art. This further leads to a large clinical evaluation of the algorithm involving 15 thousand patients from Australia and Singapore.",
"title": ""
},
{
"docid": "23def38b89358bc1090412e127c7ec2b",
"text": "We describe the design of four ornithopters ranging in wing span from 10 cm to 40 cm, and in weight from 5 g to 45 g. The controllability and power supply are two major considerations, so we compare the efficiency and characteristics between different types of subsystems such as gearbox and tail shape. Our current ornithopter is radio-controlled with inbuilt visual sensing and capable of takeoff and landing. We also concentrate on its wing efficiency based on design inspired by a real insect wing and consider that aspects of insect flight such as delayed stall and wake capture are essential at such small size. Most importantly, the advance ratio, controlled either by enlarging the wing beat amplitude or raising the wing beat frequency, is the most significant factor in an ornithopter which mimics an insect.",
"title": ""
},
{
"docid": "4f43a692ff8f6aed3a3fc4521c86d35e",
"text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.",
"title": ""
},
{
"docid": "5b36ec4a7282397402d582de7254d0c1",
"text": "Recurrent neural network language models (RNNLMs) have becoming increasingly popular in many applications such as automatic speech recognition (ASR). Significant performance improvements in both perplexity and word error rate over standard n-gram LMs have been widely reported on ASR tasks. In contrast, published research on using RNNLMs for keyword search systems has been relatively limited. In this paper the application of RNNLMs for the IARPA Babel keyword search task is investigated. In order to supplement the limited acoustic transcription data, large amounts of web texts are also used in large vocabulary design and LM training. Various training criteria were then explored to improved RNNLMs' efficiency in both training and evaluation. Significant and consistent improvements on both keyword search and ASR tasks were obtained across all languages.",
"title": ""
},
{
"docid": "7b27d8b8f05833888b9edacf9ace0a18",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "0fe95e1e3f848d8ed1bc4b54c9ccfc5d",
"text": "Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for human, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine interpretable formats from instructions has become an increasingly popular research topic due to their potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to assist users to automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.",
"title": ""
},
{
"docid": "445685897a2e7c9c5b44a713690bd0a8",
"text": "Maximum power point tracking (MPPT) is an integral part of a system of energy conversion using photovoltaic (PV) arrays. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a new method has been presented to track the global maximum power point (GMPP) of PV. Compared with the past proposed global MPPT techniques, the method proposed in this paper has the advantages of determining whether partial shading is present, calculating the number of peaks on P-V curves, and predicting the locations of GMPP and LMPP. The new method can quickly find GMPP, and avoid much energy loss due to blind scan. The experimental results verify that the proposed method guarantees convergence to the global MPP under partial shading conditions.",
"title": ""
},
{
"docid": "4c729baceae052361decd51321e0b5bc",
"text": "Learning to hash has attracted broad research interests in recent computer vision and machine learning studies, due to its ability to accomplish efficient approximate nearest neighbor search. However, the closely related task, maximum inner product search (MIPS), has rarely been studied in this literature. To facilitate the MIPS study, in this paper, we introduce a general binary coding framework based on asymmetric hash functions, named asymmetric inner-product binary coding (AIBC). In particular, AIBC learns two different hash functions, which can reveal the inner products between original data vectors by the generated binary vectors. Although conceptually simple, the associated optimization is very challenging due to the highly nonsmooth nature of the objective that involves sign functions. We tackle the nonsmooth optimization in an alternating manner, by which each single coding function is optimized in an efficient discrete manner. We also simplify the objective by discarding the quadratic regularization term which significantly boosts the learning efficiency. Both problems are optimized in an effective discrete way without continuous relaxations, which produces high-quality hash codes. In addition, we extend the AIBC approach to the supervised hashing scenario, where the inner products of learned binary codes are forced to fit the supervised similarities. Extensive experiments on several benchmark image retrieval databases validate the superiority of the AIBC approaches over many recently proposed hashing algorithms.",
"title": ""
},
{
"docid": "b4efebd49c8dd2756a4c2fb86b854798",
"text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.",
"title": ""
},
{
"docid": "bcb756857adef42264eab0f1361f8be7",
"text": "The problem of multi-class boosting is considered. A new fra mework, based on multi-dimensional codewords and predictors is introduced . The optimal set of codewords is derived, and a margin enforcing loss proposed. The resulting risk is minimized by gradient descent on a multidimensional functi onal space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate des cent, updates one predictor component at a time, 2) GD-MCBoost, based on gradi ent descent, updates all components jointly. The algorithms differ in the w ak learners that they support but are both shown to be 1) Bayes consistent, 2) margi n enforcing, and 3) convergent to the global minimum of the risk. They also red uce to AdaBoost when there are only two classes. Experiments show that both m et ods outperform previous multiclass boosting approaches on a number of data sets.",
"title": ""
},
{
"docid": "42e2a8b8c1b855fba201e3421639d80d",
"text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.",
"title": ""
}
] |
scidocsrr
|
afc8a1049b3702f7928d91cfca7ffa82
|
Bayesian Nonparametric Inverse Reinforcement Learning for Switched Markov Decision Processes
|
[
{
"docid": "e6b9c0064a8dcf2790a891e20a5bb01d",
"text": "The difficulty in inverse reinforcement learning (IRL) aris es in choosing the best reward function since there are typically an infinite number of eward functions that yield the given behaviour data as optimal. Using a Bayes i n framework, we address this challenge by using the maximum a posteriori (MA P) estimation for the reward function, and show that most of the previous IRL al gorithms can be modeled into our framework. We also present a gradient metho d for the MAP estimation based on the (sub)differentiability of the poster ior distribution. We show the effectiveness of our approach by comparing the performa nce of the proposed method to those of the previous algorithms.",
"title": ""
},
{
"docid": "52fe696242f399d830d0a675bd766128",
"text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.",
"title": ""
}
] |
[
{
"docid": "e134a35340fbf5f825d0d64108a171c3",
"text": "The present study investigated relations of anxiety sensitivity and other theoretically relevant personality factors to Copper's [Psychological Assessment 6 (1994) 117.] four categories of substance use motivations as applied to teens' use of alcohol, cigarettes, and marijuana. A sample of 508 adolescents (238 females, 270 males; mean age = 15.1 years) completed the Trait subscale of the State-Trait Anxiety Inventory for Children, the Childhood Anxiety Sensitivity Index (CASI), and the Intensity and Novelty subscales of the Arnett Inventory of Sensation Seeking. Users of each substance also completed the Drinking Motives Questionnaire-Revised (DMQ-R) and/or author-compiled measures for assessing motives for cigarette smoking and marijuana use, respectively. Multiple regression analyses revealed that, in the case of each drug, the block of personality variables predicted \"risky\" substance use motives (i.e., coping, enhancement, and/or conformity motives) over-and-above demographics. High intensity seeking and low anxiety sensitivity predicted enhancement motives for alcohol use, high anxiety sensitivity predicted conformity motives for alcohol and marijuana use, and high trait anxiety predicted coping motives for alcohol and cigarette use. Moreover, anxiety sensitivity moderated the relation between trait anxiety and coping motives for alcohol and cigarette use: the trait anxiety-coping motives relation was stronger for high, than for low, anxiety sensitive individuals. Implications of the findings for improving substance abuse prevention efforts for youth will be discussed.",
"title": ""
},
{
"docid": "21bd6f42c74930c8e9876ff4f5ef1ee2",
"text": "Dynamic channel allocation (DCA) is the key technology to efficiently utilize the spectrum resources and decrease the co-channel interference for multibeam satellite systems. Most works allocate the channel on the basis of the beam traffic load or the user terminal distribution of the current moment. These greedy-like algorithms neglect the intrinsic temporal correlation among the sequential channel allocation decisions, resulting in the spectrum resources underutilization. To solve this problem, a novel deep reinforcement learning (DRL)-based DCA (DRL-DCA) algorithm is proposed. Specifically, the DCA optimization problem, which aims at minimizing the service blocking probability, is formulated in the multibeam satellite systems. Due to the temporal correlation property, the DCA optimization problem is modeled as the Markov decision process (MDP) which is the dominant analytical approach in DRL. In modeled MDP, the system state is reformulated into an image-like fashion, and then, convolutional neural network is used to extract useful features. Simulation results show that the DRL-DCA algorithm can decrease the blocking probability and improve the carried traffic and spectrum efficiency compared with other channel allocation algorithms.",
"title": ""
},
{
"docid": "9aa95ffde4eb675c094f4eba5e970357",
"text": "Many interesting computational problems can be reformulated in terms of decision trees. A natural classical algorithm is to then run a random walk on the tree, starting at the root, to see if the tree contains a node n levels from the root. We devise a quantum mechanical algorithm that evolves a state, initially localized at the root, through the tree. We prove that if the classical strategy succeeds in reaching level n in time polynomial in n, then so does the quantum algorithm. Moreover, we find examples of trees for which the classical algorithm requires time exponential in n, but for which the quantum algorithm succeeds in polynomial time. The examples we have so far, however, could also be solved in polynomial time by different classical algorithms. MIT-CTP-2651, quant-ph/9706062 June 1997",
"title": ""
},
{
"docid": "318daea2ef9b0d7afe2cb08edcfe6025",
"text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.",
"title": ""
},
{
"docid": "908769c3f39ab3047fac2be9157d9a35",
"text": "Low-bit-rate speech coding, at rates below 4 kb/s, is needed for both communication and voice storage applications. At such low rates, full encoding of the speech waveform is not possible; therefore, low-rate coders rely instead on parametric models to represent only the most perceptually-relevant aspects of speech. While there are a number of different approaches for this modeling, all can be related to the basic linear model of speech production, where an excitation signal drives a vocal tract filter. The basic properties of the speech signal and of human speech perception can explain the principles of parametric speech coding as applied in early vocoders. Current speech modeling approaches, such as mixed excitation linear prediction, sinusoidal coding, and waveform interpolation, use more sophisticated versions of these same concepts. Modern techniques for encoding the model parameters, in particular using the theory of vector quantization, allow the encoding of the model information with very few bits per speech frame. Successful standardization of low-rate coders has enabled their widespread use for both military and satellite communications, at rates from 4 kb/s all the way down to 600 b/s. However, the goal of tollquality low-rate coding continues to provide a research challenge. This work was sponsored by the Defense Advanced Research Projects Agency under Air Force Contract FA8721-05-C-0002 . Opinions, interpretations, conclusions, and recommendat ions are those of the authors and are not necessarily endorsed by the U nited States Government.",
"title": ""
},
{
"docid": "a461592a276b13a6a25c25ab64c23d61",
"text": "To maintain the integrity of an organism constantly challenged by pathogens, the immune system is endowed with a variety of cell types. B lymphocytes were initially thought to only play a role in the adaptive branch of immunity. However, a number of converging observations revealed that two B-cell subsets, marginal zone (MZ) and B1 cells, exhibit unique developmental and functional characteristics, and can contribute to innate immune responses. In addition to their capacity to mount a local antibody response against type-2 T-cell-independent (TI-2) antigens, MZ B-cells can participate to T-cell-dependent (TD) immune responses through the capture and import of blood-borne antigens to follicular areas of the spleen. Here, we discuss the multiple roles of MZ B-cells in humans, non-human primates, and rodents. We also summarize studies - performed in transgenic mice expressing fully human antibodies on their B-cells and in macaques whose infection with Simian immunodeficiency virus (SIV) represents a suitable model for HIV-1 infection in humans - showing that infectious agents have developed strategies to subvert MZ B-cell functions. In these two experimental models, we observed that two microbial superantigens for B-cells (protein A from Staphylococcus aureus and protein L from Peptostreptococcus magnus) as well as inactivated AT-2 virions of HIV-1 and infectious SIV preferentially deplete innate-like B-cells - MZ B-cells and/or B1 B-cells - with different consequences on TI and TD antibody responses. These data revealed that viruses and bacteria have developed strategies to deplete innate-like B-cells during the acute phase of infection and to impair the antibody response. Unraveling the intimate mechanisms responsible for targeting MZ B-cells in humans will be important for understanding disease pathogenesis and for designing novel vaccine strategies.",
"title": ""
},
{
"docid": "272affb51cec7bf4fe0cbe8b10331977",
"text": "During an earthquake, structures are subjected to both horizontal and vertical shaking. Most structures are rather insensitive to variations in the vertical acceleration history and primary considerations are given to the impact of the horizontal shaking on the behavior of structures. In the laboratory, however, most component tests are carried out under uni-directional horizontal loading to simulate earthquake effects rather than bi-directional loading. For example, biaxial loading tests of reinforced concrete (RC) walls constitute less than 0.5% of all quasi-static cyclic tests that have been conducted. Bi-directional tests require larger and more complex test setups than uni-directional tests and therefore should only be pursued if they provide insights and results that cannot be obtained from uni-directional tests. To investigate the influence of bi-directional loading on RC wall performance, this paper reviews results from quasi-static cyclic tests on RC walls that are reported in the literature. Results from uni-directional tests are compared to results from bi-directional tests for walls of different cross sections including rectangular walls, T-shaped walls, and U-shaped walls. The available test data are analyzed with regard to the influence of the loading history on stiffness, strength, deformation capacity and failure mode. Walls with T-shaped and Ushaped cross sections are designed to carry loads in both horizontal directions and thus consideration of the impact of bidirectional loading on behavior should be considered. However, it is also shown that the displacement capacity of walls with rectangular cross sections is typically reduced by 20 to 30% due to bi-directional loading. Further analysis of the test data indicates that the bi-directional loading protocol selected might impact wall strength and stiffness of the test specimen. Based on these findings, future research needs with regard to the response of RC walls subjected to bi-directional loading are provided.",
"title": ""
},
{
"docid": "6afb1d4ee806a8be1bfae8748a731615",
"text": "BACKGROUND\nThe COPD Assessment Test (CAT) is responsive to change in patients with chronic obstructive pulmonary disease (COPD). However, the minimum clinically important difference (MCID) has not been established. We aimed to identify the MCID for the CAT using anchor-based and distribution-based methods.\n\n\nMETHODS\nWe did three studies at two centres in London (UK) between April 1, 2010, and Dec 31, 2012. Study 1 assessed CAT score before and after 8 weeks of outpatient pulmonary rehabilitation in patients with COPD who were able to walk 5 m, and had no contraindication to exercise. Study 2 assessed change in CAT score at discharge and after 3 months in patients admitted to hospital for more than 24 h for acute exacerbation of COPD. Study 3 assessed change in CAT score at baseline and at 12 months in stable outpatients with COPD. We focused on identifying the minimum clinically important improvement in CAT score. The St George's Respiratory Questionnaire (SGRQ) and Chronic Respiratory Questionnaire (CRQ) were measured concurrently as anchors. We used receiver operating characteristic curves, linear regression, and distribution-based methods (half SD, SE of measurement) to estimate the MCID for the CAT; we included only patients with paired CAT scores in the analysis.\n\n\nFINDINGS\nIn Study 1, 565 of 675 (84%) patients had paired CAT scores. The mean change in CAT score with pulmonary rehabilitation was -2·5 (95% CI -3·0 to -1·9), which correlated significantly with change in SGRQ score (r=0·32; p<0·0001) and CRQ score (r=-0·46; p<0·0001). In Study 2, of 200 patients recruited, 147 (74%) had paired CAT scores. Mean change in CAT score from hospital discharge to 3 months after discharge was -3·0 (95% CI -4·4 to -1·6), which correlated with change in SGRQ score (r=0·47; p<0·0001). In Study 3, of 200 patients recruited, 164 (82%) had paired CAT scores. Although no significant change in CAT score was identified after 12 months (mean 0·6, 95% CI -0·4 to 1·5), change in CAT score correlated significantly with change in SGRQ score (r=0·36; p<0·0001). Linear regression estimated the minimum clinically important improvement for the CAT to range between -1·2 and -2·8 with receiver operating characteristic curves consistently identifying -2 as the MCID. Distribution-based estimates for the MCID ranged from -3·3 to -3·8.\n\n\nINTERPRETATION\nThe most reliable estimate of the minimum important difference of the CAT is 2 points. This estimate could be useful in the clinical interpretation of CAT data, particularly in response to intervention studies.\n\n\nFUNDING\nMedical Research Council and UK National Institute of Health Research.",
"title": ""
},
{
"docid": "697360b396804ef0540d0f53b7031aed",
"text": "We describe a high-resolution, real-time 3D absolute coordinate measurement system based on a phase-shifting method. It acquires 3D shape at 30 frames per second (fps), with 266K points per frame. A tiny marker is encoded in the projected fringe pattern, and detected by software from the texture image and the gamma map. Absolute 3D coordinates are obtained from the detected marker position and the calibrated system parameters. To demonstrate the performance of the system, we measure a hand moving over a depth distance of approximately 700 mm, and human faces with expressions. Applications of such a system include manufacturing, inspection, entertainment, security, medical imaging.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "34855c90155970485094829edb6bc3cb",
"text": "We present an approach for navigating in unknown environments while, simultaneously, gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, thus allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, real-world and natural scenario.",
"title": ""
},
{
"docid": "cb7e4299f0994d2fe37ea2f1dc382610",
"text": "This paper presents a quick and accurate power control method for a zone-control induction heating (ZCIH) system. The ZCIH system consists of multiple working coils connected to multiple H-bridge inverters. The system controls the amplitude and phase angle of each coil current to make the temperature distribution on the workpiece uniform. This paper proposes a new control method for the coil currents based on a circuit model using real and imaginary (Re-Im) current/voltage components. The method detects and controls the Re-Im components of the coil current instead of the current amplitude and phase angle. As a result, the proposed method enables decoupling control for the system, making the control for each working coil independent from the others. Experiments on a 6-zone ZCIH laboratory setup are conducted to verify the validity of the proposed method. It is clarified that the proposed method has a stable operation both in transient and steady states. The proposed system and control method enable system complexity reduction and control stability improvements.",
"title": ""
},
{
"docid": "8314487867961ae2572997e2a7315c9c",
"text": "Social cognitive neuroscience examines social phenomena and processes using cognitive neuroscience research tools such as neuroimaging and neuropsychology. This review examines four broad areas of research within social cognitive neuroscience: (a) understanding others, (b) understanding oneself, (c) controlling oneself, and (d) the processes that occur at the interface of self and others. In addition, this review highlights two core-processing distinctions that can be neurocognitively identified across all of these domains. The distinction between automatic versus controlled processes has long been important to social psychological theory and can be dissociated in the neural regions contributing to social cognition. Alternatively, the differentiation between internally-focused processes that focus on one's own or another's mental interior and externally-focused processes that focus on one's own or another's visible features and actions is a new distinction. This latter distinction emerges from social cognitive neuroscience investigations rather than from existing psychological theories demonstrating that social cognitive neuroscience can both draw on and contribute to social psychological theory.",
"title": ""
},
{
"docid": "0f5e00fc025d0ee8746f774dfead1781",
"text": "Within the fields of urban reconstruction and city modeling, shape grammars have emerged as a powerful tool for both synthesizing novel designs and reconstructing buildings. Traditionally, a human expert was required to write grammars for specific building styles, which limited the scope of method applicability. We present an approach to automatically learn two-dimensional attributed stochastic context-free grammars (2D-ASCFGs) from a set of labeled building facades. To this end, we use Bayesian Model Merging, a technique originally developed in the field of natural language processing, which we extend to the domain of two-dimensional languages. Given a set of labeled positive examples, we induce a grammar which can be sampled to create novel instances of the same building style. In addition, we demonstrate that our learned grammar can be used for parsing existing facade imagery. Experiments conducted on the dataset of Haussmannian buildings in Paris show that our parsing with learned grammars not only outperforms bottom-up classifiers but is also on par with approaches that use a manually designed style grammar.",
"title": ""
},
{
"docid": "22eefe8e8a46f1323fdfdcc5e0e4cac5",
"text": " Covers the main data mining techniques through carefully selected case studies Describes code and approaches that can be easily reproduced or adapted to your own problems Requires no prior experience with R Includes introductions to R and MySQL basics Provides a fundamental understanding of the merits, drawbacks, and analysis objectives of the data mining techniques Offers data and R code on www.liaad.up.pt/~ltorgo/DataMiningWithR/",
"title": ""
},
{
"docid": "d4cd46d9c8f0c225d4fe7e34b308e8f1",
"text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.",
"title": ""
},
{
"docid": "b5af728b9a8fd3d53c8fd55784557e29",
"text": "The term \"Goal\" is increasingly being used in Requirement Engineering. Goal-Oriented requirement engineering (GORE) provides an incremental approach for elicitation, analysis, elaboration & refinement, specification and modeling of requirements. Various Goal Oriented Requirement Engineering (GORE) methods exist for these requirement engineering processes like KAOS, GBRAM etc. GORE techniques are based on certain underlying concepts and principles. This paper presents and synthesizes the underlying concepts of GORE with respect to coverage of requirement engineering activities. The advantages of GORE claimed in the literature are presented. This paper evaluates GORE techniques on the basis of concepts, process and claimed advantages.",
"title": ""
},
{
"docid": "1e8f25674dc66a298c277d80dd031c20",
"text": "DeepQ Arrhythmia Database, the first generally available large-scale dataset for arrhythmia detector evaluation, contains 897 annotated single-lead ECG recordings from 299 unique patients. DeepQ includes beat-by-beat, rhythm episodes, and heartbeats fiducial points annotations. Each patient was engaged in a sequence of lying down, sitting, and walking activities during the ECG measurement and contributed three five-minute records to the database. Annotations were manually labeled by a group of certified cardiographic technicians and audited by a cardiologist at Taipei Veteran General Hospital, Taiwan. The aim of this database is in three folds. First, from the scale perspective, we build this database to be the largest representative reference set with greater number of unique patients and more variety of arrhythmic heartbeats. Second, from the diversity perspective, our database contains fully annotated ECG measures from three different activity modes and facilitates the arrhythmia classifier training for wearable ECG patches and AAMI assessment. Thirdly, from the quality point of view, it serves as a complement to the MIT-BIH Arrhythmia Database in the development and evaluation of the arrhythmia detector. The addition of this dataset can help facilitate the exhaustive studies using machine learning models and deep neural networks, and address the inter-patient variability. Further, we describe the development and annotation procedure of this database, as well as our on-going enhancement. We plan to make DeepQ database publicly available to advance medical research in developing outpatient, mobile arrhythmia detectors.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
}
] |
scidocsrr
|
79763ad1e7ec488b68bbb5d2f3549da5
|
Mind the Traps! Design Guidelines for Rigorous BCI Experiments
|
[
{
"docid": "d4cb0a729d182222ba0a96715e07783e",
"text": "A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges Christian Mühl, Brendan Allison, Anton Nijholt & Guillaume Chanel a Inria Bordeaux Sud-Ouest, Talence, France b ASPEN Lab, Electrical and Computer Engineering Department, Old Dominion University, Norfolk, VA, USA c Department of Cognitive Science, University of California at San Diego, La Jolla, CA, USA d Faculty EEMCS, Human Media Interaction, University of Twente, Enschede, The Netherlands e Swiss Center for Affective Sciences – University of Geneva, Campus Biotech, Genève, Switzerland Published online: 14 May 2014.",
"title": ""
}
] |
[
{
"docid": "e9b2f987c4744e509b27cbc2ab1487be",
"text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.",
"title": ""
},
{
"docid": "699ba57af7ed09817db19d30110ad9b0",
"text": "A RESURF stepped oxide (RSO) transistor is presented and electrically characterised. The processed RSO MOSFET includes a trench field-plate network in the drift region that is isolated with a thick oxide layer. This trench network has a hexagonal layout that induces an improved RESURF effect at breakdown compared with the more common stripe (2D) layout. Consequently, the effective doping can be two times higher for the hexagonal layout. We have obtained a record value for the specific on-resistance (R/sub ds,on/) of 58 m/spl Omega/.mm/sup 2/ at V/sub gs/=10 V for a breakdown voltage (BV/sub ds/,) of 85 V. These values have been obtained for devices having a 4.0 /spl mu/m cell pitch and a 5 /spl mu/m long drift region with a doping level of 2.10/sup 16/ cm/sup -3/. Measurements of the gate-drain charge density (Q/sub gd/) for these devices show that Q/sub gd/ is fully dominated by the oxide capacitance of the field-plate along the drift region.",
"title": ""
},
{
"docid": "b7487dc3fc2b26ed49fd6beaa0fefe77",
"text": "Cellulose and cyclodextrins possess unique properties that can be tailored, combined, and used in a considerable number of applications, including textiles, coatings, sensors, and drug delivery systems. Successfully structuring and applying cellulose and cyclodextrins conjugates requires a deep understanding of the relation between structural, and soft matter behavior, materials, energy, and function. This review focuses on the key advances in developing materials based on these conjugates. Relevant aspects regarding structural variations, methods of synthesis, processing and functionalization, and corresponding supramolecular properties are presented. The use of cellulose/cyclodextrin conjugates as intelligent platforms for applications in materials science and pharmaceutical technology is also outlined, focusing on drug delivery, textiles, and sensors.",
"title": ""
},
{
"docid": "454c390fcd7d9a3d43842aee19c77708",
"text": "Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. In this regard some light is shed on the dangers associated with the new “all-in-one” indicator altmetric score.",
"title": ""
},
{
"docid": "d479707742dcf5bec920370d98c2eadc",
"text": "Spectral measures of linear Granger causality have been widely applied to study the causal connectivity between time series data in neuroscience, biology, and economics. Traditional Granger causality measures are based on linear autoregressive with exogenous (ARX) inputs models of time series data, which cannot truly reveal nonlinear effects in the data especially in the frequency domain. In this study, it is shown that the classical Geweke's spectral causality measure can be explicitly linked with the output spectra of corresponding restricted and unrestricted time-domain models. The latter representation is then generalized to nonlinear bivariate signals and for the first time nonlinear causality analysis in the frequency domain. This is achieved by using the nonlinear ARX (NARX) modeling of signals, and decomposition of the recently defined output frequency response function which is related to the NARX model.",
"title": ""
},
{
"docid": "22d4ab1e9ecdfb86e6823fdd780f18dd",
"text": "Part-of-Speech (POS) tagging is the process of assigning a part-of-speech like noun, verb, adjective, adverb, or other lexical class marker to each word in a sentence. This paper presents a POS Tagger for Marathi language text using Rule based approach, which will assign part of speech to the words in a sentence given as an input. We describe our system as the one which tokenizes the string into tokens and then comparing tokens with the WordNet to assign their particular tags. There are many ambiguous words in Marathi language and we resolve the ambiguity of these words using Marathi grammar rules. KeywordsPOS-Part Of Speech, WordNet, Tagset, Corpus.",
"title": ""
},
{
"docid": "fe89c8a17676b7767cfa40e7822b8d25",
"text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.",
"title": ""
},
{
"docid": "804920bbd9ee11cc35e93a53b58e7e79",
"text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.",
"title": ""
},
{
"docid": "cdc276a3c4305d6c7ba763332ae933cc",
"text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.",
"title": ""
},
{
"docid": "2d17838b344c07245ebee619859dd881",
"text": "BACKGROUND\nMortality among patients admitted to hospital after out-of-hospital cardiac arrest (OHCA) is high. Based on recent scientific evidence with a main goal of improving survival, we introduced and implemented a standardised post resuscitation protocol focusing on vital organ function including therapeutic hypothermia, percutaneous coronary intervention (PCI), control of haemodynamics, blood glucose, ventilation and seizures.\n\n\nMETHODS\nAll patients with OHCA of cardiac aetiology admitted to the ICU from September 2003 to May 2005 (intervention period) were included in a prospective, observational study and compared to controls from February 1996 to February 1998.\n\n\nRESULTS\nIn the control period 15/58 (26%) survived to hospital discharge with a favourable neurological outcome versus 34 of 61 (56%) in the intervention period (OR 3.61, CI 1.66-7.84, p=0.001). All survivors with a favourable neurological outcome in both groups were still alive 1 year after discharge. Two patients from the control period were revascularised with thrombolytics versus 30 (49%) receiving PCI treatment in the intervention period (47 patients (77%) underwent cardiac angiography). Therapeutic hypothermia was not used in the control period, but 40 of 52 (77%) comatose patients received this treatment in the intervention period.\n\n\nCONCLUSIONS\nDischarge rate from hospital, neurological outcome and 1-year survival improved after standardisation of post resuscitation care. Based on a multivariate logistic analysis, hospital treatment in the intervention period was the most important independent predictor of survival.",
"title": ""
},
{
"docid": "1524297aeea3a28a542d8006607266bf",
"text": "Fully automating machine learning pipeline is one of the outstanding challenges of general artificial intelligence, as practical machine learning often requires costly human driven process, such as hyper-parameter tuning, algorithmic selection, and model selection. In this work, we consider the problem of executing automated, yet scalable search for finding optimal gradient based meta-learners in practice. As a solution, we apply progressive neural architecture search to proto-architectures by appealing to the model agnostic nature of general gradient based meta learners. In the presence of recent universality result of Finn et al.[9], our search is a priori motivated in that neural network architecture search dynamics—automated or not—may be quite different from that of the classical setting with the same target tasks, due to the presence of the gradient update operator. A posteriori, our search algorithm, given appropriately designed search spaces, finds gradient based meta learners with non-intuitive proto-architectures that are narrowly deep, unlike the inception-like structures previously observed in the resulting architectures of traditional NAS algorithms. Along with these notable findings, the searched gradient based meta-learner achieves state-of-the-art results on the few shot classification problem on Mini-ImageNet with 76.29% accuracy, which is an 13.18% improvement over results reported in the original MAML paper. To our best knowledge, this work is the first successful AutoML implementation in the context of meta learning.",
"title": ""
},
{
"docid": "82479411c3d3b6796f96880ee5012d74",
"text": "The recent advances brought by deep learning allowed to improve the performance in image retrieval tasks. Through the many convolutional layers, available in a Convolutional Neural Network (CNN), it is possible to obtain a hierarchy of features from the evaluated image. At every step, the patches extracted are smaller than the previous levels and more representative. Following this idea, this paper introduces a new detector applied on the feature maps extracted from pre-trained CNN. Specifically, this approach lets to increase the number of features in order to increase the performance of the aggregation algorithms like the most famous and used VLAD embedding. The proposed approach is tested on different public datasets: Holidays, Oxford5k, Paris6k and UKB.",
"title": ""
},
{
"docid": "914d17433df678e9ace1c9edd1c968d3",
"text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.",
"title": ""
},
{
"docid": "7ac0875617d11cb811de8e2d4e117e01",
"text": "The video-recorded lecture represents a central feature of most online learning platforms. Nonetheless, little is known about how to best structure video-recorded lectures in order to optimize learning. Here, we focused on the tendency for high school and college students to be overconfident in their learning from video-recorded modules, and demonstrated that testing could be used to effectively improve the calibration between predicted and actual performance. Notably, interpolating a lecture with repeated",
"title": ""
},
{
"docid": "d37dd9382e7fd8e4c7e7099728a09d59",
"text": "OBJECTIVE\nTo assess immediate and near-term effects of 2 exercise training programs for persons with idiopathic Parkinson's disease (IPD).\n\n\nDESIGN\nRandomized control trial.\n\n\nSETTING\nPublic health facility and medical center.\n\n\nPARTICIPANTS\nFifteen persons with IPD.\n\n\nINTERVENTION\nCombined group (balance and resistance training) and balance group (balance training only) underwent 10 weeks of high-intensity resistance training (knee extensors and flexors, ankle plantarflexion) and/or balance training under altered visual and somatosensory sensory conditions, 3 times a week on nonconsecutive days. Groups were assessed before, immediately after training, and 4 weeks later.\n\n\nMAIN OUTCOME MEASURES\nBalance was assessed by computerized dynamic posturography, which determined the subject's response to reduced or altered visual and somatosensory orientation cues (Sensory Orientation Test [SOT]). Muscle strength was assessed by measuring the amount of weight a participant could lift, by using a standardized weight-and-pulley system, during a 4-repetition-maximum test of knee extension, knee flexion, and ankle plantarflexion.\n\n\nRESULTS\nBoth types of training improved SOT performance. This effect was larger in the combined group. Both groups could balance longer before falling, and this effect persisted for at least 4 weeks. Muscle strength increased marginally in the balance group and substantially in the combined group, and this effect persisted for at least 4 weeks.\n\n\nCONCLUSION\nMuscle strength and balance can be improved in persons with IPD by high-intensity resistance training and balance training.",
"title": ""
},
{
"docid": "df55896d227ae0b4d565af22bffca3ac",
"text": "Copper nanoparticles are being given considerable attention as of late due to their interesting properties and potential applications in many areas of industry. One such exploitable use is as the major constituent of conductive inks and pastes used for printing various electronic components. In this study, copper nanoparticles were synthesized through a relatively large-scale (5 l), high-throughput (0.2 M) process. This facile method occurs through the chemical reduction of copper sulfate with sodium hypophosphite in ethylene glycol within the presence of a polymer surfactant (PVP), which was included to prevent aggregation and give dispersion stability to the resulting colloidal nanoparticles. Reaction yields were determined to be quantitative while particle dispersion yields were between 68 and 73%. The size of the copper nanoparticles could be controlled between 30 and 65 nm by varying the reaction time, reaction temperature, and relative ratio of copper sulfate to the surfactant. Field emission scanning electron microscopy (FE-SEM) and transmission electron microscopy (TEM) images of the particles revealed a spherical shape within the reported size regime, and x-ray analysis confirmed the formation of face-centered cubic (FCC) metallic copper. Furthermore, inkjet printing nanocopper inks prepared from the polymer-stabilized copper nanoparticles onto polyimide substrates resulted in metallic copper traces with low electrical resistivities (≥3.6 µΩ cm, or ≥2.2 times the resistivity of bulk copper) after a relatively low-temperature sintering process (200 °C for up to 60 min).",
"title": ""
},
{
"docid": "1dd4a95adcd4f9e7518518148c3605ac",
"text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.",
"title": ""
},
{
"docid": "abe32957798ec21bd7dbe714c21540ba",
"text": "OBJECTIVE\nTo evaluate the effects of reflexology treatment on quality of life, sleep disturbances, and fatigue in breast cancer patients during radiation therapy.\n\n\nMETHODS/SUBJECTS\nA total of 72 women with breast cancer (stages 1-3) scheduled for radiation therapy were recruited.\n\n\nDESIGN\nWomen were allocated upon their preference either to the group receiving reflexology treatments once a week concurrently with radiotherapy and continued for 10 weeks or to the control group (usual care).\n\n\nOUTCOME MEASURES\nThe Lee Fatigue Scale, General Sleep Disturbance Scale, and Multidimensional Quality of Life Scale Cancer were completed by each patient in both arms at the beginning of the radiation treatment, after 5 weeks, and after 10 weeks of reflexology treatment.\n\n\nRESULTS\nThe final analysis included 58 women. The reflexology treated group demonstrated statistically significant lower levels of fatigue after 5 weeks of radiation therapy (p < 0.001), compared to the control group. It was also detected that although the quality of life in the control group deteriorated after 5 and 10 weeks of radiation therapy (p < 0.01 and p < 0.05, respectively), it was preserved in the reflexology group, which also demonstrated a significant improvement in the quality of sleep after 10 weeks of radiation treatment (p < 0.05). Similar patterns were obtained in the assessment of the pain levels experienced by the patients.\n\n\nCONCLUSIONS\nThe results of the present study indicate that reflexology may have a positive effect on fatigue, quality of sleep, pain, and quality of life in breast cancer patients during radiation therapy. Reflexology prevented the decline in quality of life and significantly ameliorated the fatigue and quality of sleep of these patients. An encouraging trend was also noted in amelioration of pain levels.",
"title": ""
},
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
},
{
"docid": "2f51d8d289a7c615ddb4dc01803612a7",
"text": "Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designer's experience and benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In a second study, we found that designers who got crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.",
"title": ""
}
] |
scidocsrr
|
dfb0171ddc4b65f5fbae045df35ab9a3
|
A survey on network attacks and Intrusion detection systems
|
[
{
"docid": "24b62b4d3ecee597cffef75e0864bdd8",
"text": "Botnets can cause significant security threat and huge loss to organizations, and are difficult to discover their existence. Therefore they have become one of the most severe threats on the Internet. The core component of botnets is their command and control channel. Botnets often use IRC (Internet Relay Chat) as a communication channel through which the botmaster can control the bots to launch attacks or propagate more infections. In this paper, anomaly score based botnet detection is proposed to identify the botnet activities by using the similarity measurement and the periodic characteristics of botnets. To improve the detection rate, the proposed system employs two-level correlation relating the set of hosts with same anomaly behaviors. The proposed method can differentiate the malicious network traffic generated by infected hosts (bots) from that by normal IRC clients, even in a network with only a very small number of bots. The experiment results show that, regardless the size of the botnet in a network, the proposed approach efficiently detects abnormal IRC traffic and identifies botnet activities. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "081e474c622f122832490a54657e5051",
"text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.",
"title": ""
}
] |
[
{
"docid": "aee8080bb0a1c9de2eec907de095f1f9",
"text": "PURPOSE OF REVIEW\nCranioplasty has been long practiced, and the reconstructive techniques continue to evolve. With a variety of options available for filling cranial defects, a review of the current practices in cranioplasty allows for reporting the most advanced techniques and specific indications.\n\n\nRECENT FINDINGS\nOverwhelming support remains for the use of autologous bone grafts in filling the cranial defects. Alloplastic alternatives have relative advantages and disadvantages depending on the patient population and specific indications. Application of imaging technology has allowed for the utilization of custom-made alloplastic implants when autologous bone grafts are not feasible.\n\n\nSUMMARY\nAutologous bone grafts remain the best option for adult and pediatric patients with viable donor sites and small-to-medium defects. Large defects in the adult population can be reconstructed with titanium mesh and polymethylmethacrylate overlay with or without the use of computer-assisted design and manufacturing customization. In pediatric patients, exchange cranioplasty offers a viable technique for using an autologous bone graft, while simultaneously filling the donor site with particulate bone graft. Advances in alloplastic materials and custom manufacturing of implants will have an important influence on cranioplasty techniques in the years to come.",
"title": ""
},
{
"docid": "d4bbd07979940fd2b152144ab626fdb1",
"text": "Extracting minutiae from fingerprint images is one of the most important steps in automatic fingerprint identification and classification. Minutiae are local discontinuities in the fingerprint pattern, mainly terminations and bifurcations. In this work we propose two methods for fingerprint image enhancement. The first one is carried out using local histogram equalization, Wiener filtering, and image binarization. The second method use a unique anisotropic filter for direct grayscale enhancement. The results achieved are compared with those obtained through some other methods. Both methods show some improvement in the minutiae detection process in terms of either efficiency or time required.",
"title": ""
},
{
"docid": "40ebbaa3e7946a1ea6d39204b5efa611",
"text": "In their article, \"Does the autistic child have a 'theory of mind'?,\" Baron-Cohen et al. [1985] proposed a novel paradigm to explain social impairment in children diagnosed as autistic (AD). Much research has been undertaken since their article went to print. The purpose of this commentary is to gauge whether Theory of Mind (ToM)-or lack thereof-is a valid model for explaining abnormal social behavior in children with AD. ToM is defined as \"the ability to impute mental states to oneself and to others\" and \"the ability to make inferences about what other people believe to be the case.\" The source for their model was provided by an article published earlier by Premack and Woodruff, \"Does the chimpanzee have a theory of mind?\" Later research in chimpanzees did not support a ToM in primates. From the outset, ToM as a neurocognitive model of autism has had many shortcomings-methodological, logical, and empirical. Other ToM assumptions, for example, its universality in all children in all cultures and socioeconomic conditions, are not supported by data. The age at which a ToM emerges, or events that presage a ToM, are too often not corroborated. Recent studies of mirror neurons, their location and interconnections in brain, their relationship to social behavior and language, and the effect of lesions there on speech, language and social behavior, strongly suggests that a neurobiological as opposed to neurocognitive model of autism is a more parsimonious explanation for the social and behavioral phenotypes observed in autism.",
"title": ""
},
{
"docid": "5ee78ac120ab734826b08861133655a9",
"text": "This paper presents an approach to organizing folktales based on a data structure called a plot graph, which captures the narrative flow of events in a folktale. The similarity between two folktales can be computed as the structural similarity between their corresponding plot graphs. This is performed using the well-known Needleman-Wunsch algorithm. To test the efficacy of this approach, experiments are carried out using a small collection of 24 folktales grouped into 5 categories based on the Aarne-Thompson index. The best result is obtained by combining the proposed structural-based similarity measure with a more conventional bag of words vector space model, where 19 out of the 24 folktales (79.16%) yield higher average similarity with folktales within their respective categories as opposed to across categories.",
"title": ""
},
{
"docid": "b910376732bde1d7499875be8bdaa1ec",
"text": "Social tagging, as a novel approach to information organization and discovery, has been widely adopted in many Web 2.0 applications. Tags contributed by users to annotate a variety of Web resources or items provide a new type of information that can be exploited by recommender systems. Nevertheless, the sparsity of the ternary interaction data among users, items, and tags limits the performance of tag-based recommendation algorithms. In this article, we propose to deal with the sparsity problem in social tagging by applying random walks on ternary interaction graphs to explore transitive associations between users and items. The transitive associations in this article refer to the path of the link between any two nodes whose length is greater than one. Taking advantage of these transitive associations can allow more accurate measurement of the relevance between two entities (e.g., user-item, user-user, and item-item). A PageRank-like algorithm has been developed to explore these transitive associations by spreading users’ preferences on an item similarity graph and spreading items’ influences on a user similarity graph. Empirical evaluation on three real-world datasets demonstrates that our approach can effectively alleviate the sparsity problem and improve the quality of item recommendation.",
"title": ""
},
{
"docid": "88e59d7830d63fe49b1a4d49726b01db",
"text": "Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and timeconsuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semisupervised semantic parsing, which learns both from limited amounts of parallel data, and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as treestructured latent variables. Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models.1",
"title": ""
},
{
"docid": "da3650998a4bd6ea31467daa631d0e05",
"text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "53df69bf8750a7e97f12b1fcac14b407",
"text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.",
"title": ""
},
{
"docid": "becd66e0637b9b6dd07b45e6966227d6",
"text": "In real life, when telling a person’s age from his/her face, we tend to look at his/her whole face first and then focus on certain important regions like eyes. After that we will focus on each particular facial feature individually like the nose or the mouth so that we can decide the age of the person. Similarly, in this paper, we propose a new framework for age estimation, which is based on human face sub-regions. Each sub-network in our framework takes the input of two images each from human facial region. One of them is the global face, and the other is a vital sub-region. Then, we combine the predictions from different sub-regions based on a majority voting method. We call our framework Multi-Region Network Prediction Ensemble (MRNPE) and evaluate our approach using two popular public datasets: MORPH Album II and Cross Age Celebrity Dataset (CACD). Experiments show that our method outperforms the existing state-of-the-art age estimation methods by a significant margin. The Mean Absolute Errors (MAE) of age estimation are dropped from 3.03 to 2.73 years on the MORPH Album II and 4.79 to 4.40 years on the CACD.",
"title": ""
},
{
"docid": "ac24229e51822e44cb09baaf44e9623e",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "fba3c3a0fbc08c992d388e6854890b01",
"text": "This paper presents a revenue maximisation model for sales channel allocation based on dynamic programming. It helps the media seller to determine how to distribute the sales volume of page views between guaranteed and nonguaranteed channels for display advertising. The model can algorithmically allocate and price the future page views via standardised guaranteed contracts in addition to real-time bidding (RTB). This is one of a few studies that investigates programmatic guarantee (PG) with posted prices. Several assumptions are made for media buyers’ behaviour, such as risk-aversion, stochastic demand arrivals, and time and price effects. We examine our model with an RTB dataset and find it increases the seller’s expected total revenue by adopting different pricing and allocation strategies depending the level of competition in RTB campaigns. The insights from this research can increase the allocative efficiency of the current media sellers’ sales mechanism and thus improve their revenue.",
"title": ""
},
{
"docid": "6b6e055e4d6aea80d4f01eee47256be1",
"text": "Ponseti treatment for clubfoot has been successful, but recurrence continues to be an issue. After correction, patients are typically braced full time with a static abduction bar and shoes. Patient compliance with bracing is a modifiable risk factor for recurrence. We hypothesized that the use of Mitchell shoes and a dynamic abduction brace would increase compliance and thereby reduce the rate of recurrence. A prospective, randomized trial was carried out with consecutive patients treated for idiopathic clubfeet from 2008 to 2012. After casting and tenotomy, patients were randomized into either the dynamic or static abduction bar group. Both groups used Mitchell shoes. Patient demographics, satisfaction, and compliance were measured with self-reported questionnaires throughout follow-up. Thirty patients were followed up, with 15 in each group. Average follow-up was 18.7 months (range 3-40.7 months). Eight recurrences (26.7%) were found, with four in each group. Recurrences had a statistically significant higher number of casts and a longer follow-up time. Mean income, education level, patient-reported satisfaction and compliance, and age of caregiver tended to be lower in the recurrence group but were not statistically significant. No differences were found between the two brace types. Our study showed excellent patient satisfaction and reported compliance with Mitchell shoes and either the dynamic or static abduction bar. Close attention and careful education should be directed towards patients with known risk factors or difficult casting courses to maximize brace compliance, a modifiable risk factor for recurrence.",
"title": ""
},
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
},
{
"docid": "4fa9db557f53fa3099862af87337cfa9",
"text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.",
"title": ""
},
{
"docid": "eaf3d25c7babb067e987b2586129e0e4",
"text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.",
"title": ""
},
{
"docid": "34e1566235f94a265564cbe5d0bf7cc1",
"text": "Circuit techniques that overcome practical noise, reliability, and EMI limitations are reported. An auxiliary loop with ramping circuits suppresses pop-and-click noise to 1 mV for an amplifier with 4 V-achievable output voltage. Switching edge rate control enables the system to meet the EN55022 Class-B standard with a 15 dB margin. An enhanced scheme detects short-circuit conditions without relying on overlimit current events.",
"title": ""
},
{
"docid": "99a9dd7ed22351a1b33528f878537da8",
"text": "The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.",
"title": ""
},
{
"docid": "de638a90e5a6ef3bf030d998b0e921a3",
"text": "The quantization techniques have shown competitive performance in approximate nearest neighbor search. The state-of-the-art algorithm, composite quantization, takes advantage of the compositionabity, i.e., the vector approximation accuracy, as opposed to product quantization and Cartesian k-means. However, we have observed that the runtime cost of computing the distance table in composite quantization, which is used as a lookup table for fast distance computation, becomes nonnegligible in real applications, e.g., reordering the candidates retrieved from the inverted index when handling very large scale databases. To address this problem, we develop a novel approach, called sparse composite quantization, which constructs sparse dictionaries. The benefit is that the distance evaluation between the query and the dictionary element (a sparse vector) is accelerated using the efficient sparse vector operation, and thus the cost of distance table computation is reduced a lot. Experiment results on large scale ANN retrieval tasks (1M SIFTs and 1B SIFTs) and applications to object retrieval show that the proposed approach yields competitive performance: superior search accuracy to product quantization and Cartesian k-means with almost the same computing cost, and much faster ANN search than composite quantization with the same level of accuracy.",
"title": ""
},
{
"docid": "4d79d71c019c0f573885ffa2bc67f48b",
"text": "In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.",
"title": ""
},
{
"docid": "c5639c65908882291c29e147605c79ca",
"text": "Dirofilariasis is a rare disease in humans. We report here a case of a 48-year-old male who was diagnosed with pulmonary dirofilariasis in Korea. On chest radiographs, a coin lesion of 1 cm in diameter was shown. Although it looked like a benign inflammatory nodule, malignancy could not be excluded. So, the nodule was resected by video-assisted thoracic surgery. Pathologically, chronic granulomatous inflammation composed of coagulation necrosis with rim of fibrous tissues and granulations was seen. In the center of the necrotic nodules, a degenerating parasitic organism was found. The parasite had prominent internal cuticular ridges and thick cuticle, a well-developed muscle layer, an intestinal tube, and uterine tubules. The parasite was diagnosed as an immature female worm of Dirofilaria immitis. This is the second reported case of human pulmonary dirofilariasis in Korea.",
"title": ""
}
] |
scidocsrr
|
7b388588d67297cec35614d2702025c2
|
SEMAFOR 1.0: A Probabilistic Frame-Semantic Parser
|
[
{
"docid": "33b2c5abe122a66b73840506aa3b443e",
"text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.",
"title": ""
}
] |
[
{
"docid": "55772e55adb83d4fd383ddebcf564a71",
"text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.",
"title": ""
},
{
"docid": "0a0f826f1a8fa52d61892632fd403502",
"text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.",
"title": ""
},
{
"docid": "6a2d1dfb61a4e37c8554900e0d366f51",
"text": "Attention Deficit/Hyperactivity Disorder (ADHD) is a neurobehavioral disorder which leads to the difficulty on focusing, paying attention and controlling normal behavior. Globally, the prevalence of ADHD is estimated to be 6.5%. Medicine has been widely used for the treatment of ADHD symptoms, but the patient may have a chance to suffer from the side effects of drug, such as vomit, rash, urticarial, cardiac arrthymia and insomnia. In this paper, we propose the alternative medicine system based on the brain-computer interface (BCI) technology called neurofeedback. The proposed neurofeedback system simultaneously employs two important signals, i.e. electroencephalogram (EEG) and hemoencephalogram (HEG), which can quickly reveal the brain functional network. The treatment criteria are that, for EEG signals, the patient needs to maintain the beta activities (13-30 Hz) while reducing the alpha activities (7-13 Hz). Simultaneously, HEG signals need to be maintained continuously increasing to some setting thresholds of the brain blood oxygenation levels. Time-frequency selective multilayer perceptron (MLP) is employed to capture the mentioned phenomena in real-time. The experimental results show that the proposed system yields the sensitivity of 98.16% and the specificity of 95.57%. Furthermore, from the resulting weights of the proposed MLP, we can also conclude that HEG signals yield the most impact to our neurofeedback treatment followed by the alpha, beta, and theta activities, respectively.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "1924730db532936166d07c6bab058800",
"text": "The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum.",
"title": ""
},
{
"docid": "3f1d69e8a2fdfc69e451679255782d70",
"text": "This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision).\n The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production.\n Visit the tutorial website at http://hunch.net/~large_scale_survey/",
"title": ""
},
{
"docid": "2732b8453269834e481428f054ff4992",
"text": "Otsu reference proposed a criterion for maximizing the between-class variance of pixel intensity to perform picture thresholding. However, Otsu’s method for image segmentation is very time-consuming because of the inefficient formulation of the between-class variance. In this paper, a faster version of Otsu’s method is proposed for improving the efficiency of computation for the optimal thresholds of an image. First, a criterion for maximizing a modified between-class variance that is equivalent to the criterion of maximizing the usual between-class variance is proposed for image segmentation. Next, in accordance with the new criterion, a recursive algorithm is designed to efficiently find the optimal threshold. This procedure yields the same set of thresholds as the original method. In addition, the modified between-class variance can be pre-computed and stored in a look-up table. Our analysis of the new criterion clearly shows that it takes less computation to compute both the cumulative probability (zeroth order moment) and the mean (first order moment) of a class, and that determining the modified between-class variance by accessing a look-up table is quicker than that by performing mathematical arithmetic operations. For example, the experimental results of a five-level threshold selection show that our proposed method can reduce down the processing time from more than one hour by the conventional Otsu’s method to less than 107 seconds.",
"title": ""
},
{
"docid": "44ea81d223e3c60c7b4fd1192ca3c4ba",
"text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes",
"title": ""
},
{
"docid": "b40ef74fd41676d51d0870578e483b27",
"text": "In this paper, we propose a simple but effective image prior-dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation-most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.",
"title": ""
},
{
"docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9",
"text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.",
"title": ""
},
{
"docid": "0e803e853422328aeef59e426410df48",
"text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.",
"title": ""
},
{
"docid": "121a388391c12de1329e74fdeebdaf10",
"text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.",
"title": ""
},
{
"docid": "b3cb053d44a90a2a9a9332ac920f0e90",
"text": "This study develops a crowdfunding sponsor typology based on sponsors’ motivations for participating in a project. Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "25d913188ee5790d5b3a9f5fb8b68dda",
"text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.",
"title": ""
},
{
"docid": "370767f85718121dc3975f383bf99d8b",
"text": "A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set.",
"title": ""
},
{
"docid": "774394b64cf9a98f481b343866f648a6",
"text": "The aim of this study was to evaluate the anatomy of the central myelin portion and the central myelin-peripheral myelin transitional zone of the trigeminal, facial, glossopharyngeal and vagus nerves from fresh cadavers. The aim was also to investigate the relationship between the length and volume of the central myelin portion of these nerves with the incidences of the corresponding cranial dysfunctional syndromes caused by their compression to provide some more insights for a better understanding of mechanisms. The trigeminal, facial, glossopharyngeal and vagus nerves from six fresh cadavers were examined. The length of these nerves from the brainstem to the foramen that they exit were measured. Longitudinal sections were stained and photographed to make measurements. The diameters of the nerves where they exit/enter from/to brainstem, the diameters where the transitional zone begins, the distances to the most distal part of transitional zone from brainstem and depths of the transitional zones were measured. Most importantly, the volume of the central myelin portion of the nerves was calculated. Correlation between length and volume of the central myelin portion of these nerves and the incidences of the corresponding hyperactive dysfunctional syndromes as reported in the literature were studied. The distance of the most distal part of the transitional zone from the brainstem was 4.19 ± 0.81 mm for the trigeminal nerve, 2.86 ± 1.19 mm for the facial nerve, 1.51 ± 0.39 mm for the glossopharyngeal nerve, and 1.63 ± 1.15 mm for the vagus nerve. The volume of central myelin portion was 24.54 ± 9.82 mm3 in trigeminal nerve; 4.43 ± 2.55 mm3 in facial nerve; 1.55 ± 1.08 mm3 in glossopharyngeal nerve; 2.56 ± 1.32 mm3 in vagus nerve. Correlations (p < 0.001) have been found between the length or volume of central myelin portions of the trigeminal, facial, glossopharyngeal and vagus nerves and incidences of the corresponding diseases. At present it is rather well-established that primary trigeminal neuralgia, hemifacial spasm and vago-glossopharyngeal neuralgia have as one of the main causes a vascular compression. The strong correlations found between the lengths and volumes of the central myelin portions of the nerves and the incidences of the corresponding diseases is a plea for the role played by this anatomical region in the mechanism of these diseases.",
"title": ""
},
{
"docid": "83de0252b28e4dcedefc239aaaee79e5",
"text": "Recently, there has been immense interest in using unmanned aerial vehicles (UAVs) for civilian operations such as package delivery, aerial surveillance, and disaster response. As a result, UAV traffic management systems are needed to support potentially thousands of UAVs flying simultaneously in the air space, in order to ensure their liveness and safety requirements are met. Currently, the analysis of large multi-agent systems cannot tractably provide these guarantees if the agents’ set of maneuvers are unrestricted. In this paper, we propose to have platoons of UAVs flying on air highways in order to impose the air space structure that allows for tractable analysis and intuitive monitoring. For the air highway placement problem, we use the flexible and efficient fast marching method to solve the Eikonal equation, which produces a sequence of air highways that minimizes the cost of flying from an origin to any destination. Within the platoons that travel on the air highways, we model each vehicle as a hybrid system with modes corresponding to its role in the platoon. Using Hamilton-Jacobi reachability, we propose several liveness controllers and a safety controller that guarantee the success and safety of all mode transitions. For a single altitude range, our approach guarantees safety for one safety breach per vehicle; in the unlikely event of multiple safety breaches, safety can be guaranteed over multiple altitude ranges. We demonstrate the satisfaction of liveness and safety requirements through simulations of three common scenarios.",
"title": ""
},
{
"docid": "06f27036cd261647c7670bdf854f5fb4",
"text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.",
"title": ""
},
{
"docid": "c2db241a94d9fec15af613d593730dea",
"text": "This study investigated the influence of Cloisite-15A nanoclay on the physical, performance, and mechanical properties of bitumen binder. Cloisite-15A was blended in the bitumen in variegated percentages from 1% to 9% with increment of 2%. The blended bitumen was characterized using penetration, softening point, and dynamic viscosity using rotational viscometer, and compared with unmodified bitumen equally penetration grade 60/70. The rheological parameters were investigated using Dynamic Shear Rheometer (DSR), and mechanical properties were investigated by using Marshall Stability test. The results indicated an increase in softening point, dynamic viscosity and decrease in binder penetration. Rheological properties of bitumen increase complex modulus, decrease phase angle and improve rutting resistances as well. There was significant improvement in Marshall Stability, rather marginal improvement in flow value. The best improvement in the modified binder was obtained with 5% Cloisite-15A nanoclay. Keywords—Cloisite-15A, complex shear modulus, phase angle, rutting resistance.",
"title": ""
}
] |
scidocsrr
|
d26016066331715339a082414469a654
|
GUI Design for IDE Command Recommendations
|
[
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
}
] |
[
{
"docid": "c41259069ff779cf727ee4cfcf317cee",
"text": "Trends in miniaturization have resulted in an explosion of small, low power devices with network connectivity. Welcome to the era of Internet of Things (IoT), wearable devices, and automated home and industrial systems. These devices are loaded with sensors, collect information from their surroundings, process it, and relay it to remote locations for further analysis. Pervasive and seeminly harmless, this new breed of devices raise security and privacy concerns. In this chapter, we evaluate the security of these devices from an industry point of view, concentrating on the design flow, and catalogue the types of vulnerabilities we have found. We also present an in-depth evaluation of the Google Nest Thermostat, the Nike+ Fuelband SE Fitness Tracker, the Haier SmartCare home automation system, and the Itron Centron CL200 electric meter. We study and present an analysis of the effects of these compromised devices in an every day setting. We then finish by discussing design flow enhancements, with security mechanisms that can be efficiently added into a device in a comparative way.",
"title": ""
},
{
"docid": "bf08d673b40109d6d6101947258684fd",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "0ce46853852a20e5e0ab9aacd3ec20c1",
"text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.",
"title": ""
},
{
"docid": "be7662e67b3cff4991ae7249e8f8cde2",
"text": "The kernelized correlation filter (KCF) is one of the state-of-the-art object trackers. However, it does not reasonably model the distribution of correlation response during tracking process, which might cause the drifting problem, especially when targets undergo significant appearance changes due to occlusion, camera shaking, and/or deformation. In this paper, we propose an output constraint transfer (OCT) method that by modeling the distribution of correlation response in a Bayesian optimization framework is able to mitigate the drifting problem. OCT builds upon the reasonable assumption that the correlation response to the target image follows a Gaussian distribution, which we exploit to select training samples and reduce model uncertainty. OCT is rooted in a new theory which transfers data distribution to a constraint of the optimized variable, leading to an efficient framework to calculate correlation filters. Extensive experiments on a commonly used tracking benchmark show that the proposed method significantly improves KCF, and achieves better performance than other state-of-the-art trackers. To encourage further developments, the source code is made available.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
{
"docid": "8a7a8de5cae191a4493e5a0e4f34bbf1",
"text": "B-spline surfaces, although widely used, are incapable of describing surfaces of arbitrary topology. It is not possible to model a general closed surface or a surface with handles as a single non-degenerate B-spline. In practice such surfaces are often needed. In this paper, we present generalizations of biquadratic and bicubic B-spline surfaces that are capable of capturing surfaces of arbitrary topology (although restrictions are placed on the connectivity of the control mesh). These results are obtained by relaxing the sufficient but not necessary smoothness constraints imposed by B-splines and through the use of an n-sided generalization of Bézier surfaces called S-patches.",
"title": ""
},
{
"docid": "bb4001c4cb5fde8d34fd48ee50eb053c",
"text": "We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam’s razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has lowH0 entropy (cardinality) in the true direction, it must have high H0 entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum H1 entropy (Shannon Entropy) is equivalent to the problem of finding minimum joint entropy given n marginal distributions, also known as minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem, that for n = 2 provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum Shannon entropy. Our greedy entropy-based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.",
"title": ""
},
{
"docid": "3cde70842ee80663cbdc04db6a871d46",
"text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.",
"title": ""
},
{
"docid": "4f37b872c44c2bda3ff62e3e8ebf4391",
"text": "This paper proposes a method based on conditional random fields to incorporate sentence structure (syntax and semantics) and context information to identify sentiments of sentences within a document. It also proposes and evaluates two different active learning strategies for labeling sentiment data. The experiments with the proposed approach demonstrate a 5-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods.",
"title": ""
},
{
"docid": "b4e9cfc0dbac4a5d7f76001e73e8973d",
"text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.",
"title": ""
},
{
"docid": "5e8154a99b4b0cc544cab604b680ebd2",
"text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.",
"title": ""
},
{
"docid": "5f01e9cd6dc2f9bd051e172b3108f06d",
"text": "Head pose estimation is recently a more and more popular area of research. For the last three decades new approaches have constantly been developed, and steadily better accuracy was achieved. Unsurprisingly, a very broad range of methods was explored statistical, geometrical and tracking-based to name a few. This paper presents a brief summary of the evolution of head pose estimation and a glimpse at the current state-of-the-art in this eld.",
"title": ""
},
{
"docid": "4fa9db557f53fa3099862af87337cfa9",
"text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.",
"title": ""
},
{
"docid": "1fc9a4a769c7ff6d6ddeff7e5df7986b",
"text": "This paper describes a model of problem solving for use in collaborative agents. It is intended as a practical model for use in implemented systems, rather than a study of the theoretical underpinnings of collaborative action. The model is based on our experience in building a series of interactive systems in different domains, including route planning, emergency management, and medical advising. It is currently being used in an implemented, end-to- end spoken dialogue system in which the system assists a person in managing their medications. While we are primarily focussed on human-machine collaboration, we believe that the model will equally well apply to interactions between sophisticated software agents that need to coordinate their activities.",
"title": ""
},
{
"docid": "937de8ba80bd92084f9c2886a28874d1",
"text": "Android security has been a hot spot recently in both academic research and public concerns due to numerous instances of security attacks and privacy leakage on Android platform. Android security has been built upon a permission based mechanism which restricts accesses of third-party Android applications to critical resources on an Android device. Such permission based mechanism is widely criticized for its coarse-grained control of application permissions and difficult management of permissions by developers, marketers, and end-users. In this paper, we investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claim of permissions, permission escalation attack, and TOCTOU (Time of Check to Time of Use) attack. We illustrate the relationships among these issues, and investigate the existing countermeasures to address these issues. In particular, we provide a systematic review on the development of these countermeasures, and compare them according to their technical features. Finally, we propose several methods to further mitigate the risk in Android security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b0b2e50ea9020f6dd6419fbb0520cdfd",
"text": "Social interactions, such as an aggressive encounter between two conspecific males or a mating encounter between a male and a female, typically progress from an initial appetitive or motivational phase, to a final consummatory phase. This progression involves both changes in the intensity of the animals' internal state of arousal or motivation and sequential changes in their behavior. How are these internal states, and their escalating intensity, encoded in the brain? Does this escalation drive the progression from the appetitive/motivational to the consummatory phase of a social interaction and, if so, how are appropriate behaviors chosen during this progression? Recent work on social behaviors in flies and mice suggests possible ways in which changes in internal state intensity during a social encounter may be encoded and coupled to appropriate behavioral decisions at appropriate phases of the interaction. These studies may have relevance to understanding how emotion states influence cognitive behavioral decisions at higher levels of brain function.",
"title": ""
},
{
"docid": "a0d49d0f2dd9ef4fabf98d36f0180347",
"text": "This study draws on the work/family border theory to investigate the role of information communication technology (ICT) use at home in shaping the characteristics of work/family borders (i.e. flexibility and permeability) and consequently influencing individuals’ perceived work-family conflict, technostress, and level of telecommuting. Data were collected from a probability sample of 509 information workers in Hong Kong who were not selfemployed. The results showed that the more that people used ICT to do their work at home, the greater they perceived their work/family borders flexible and permeable. Interestingly, low flexibility and high permeability, rather than the use of ICT at home, had much stronger influences on increasing, in particular, family-to-work conflict. As expected, work-tofamily conflict was significantly and positively associated with technostress. Results also showed that the telecommuters tended to be older, had lower family incomes, used ICT frequently at home, and had a permeable boundary that allowed work to penetrate their home domain. The theoretical and practical implications are discussed. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0a7f93e98e1d256ea6a4400f33753d6a",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "dfde48aa79ac10382fe4b9a312662cd9",
"text": "221 Abstract— Due to rapid advances and availabilities of powerful image processing software's, it is easy to manipulate and modify digital images. So it is very difficult for a viewer to judge the authenticity of a given image. Nowadays, it is possible to add or remove important features from an image without leaving any obvious traces of tampering. As digital cameras and video cameras replace their analog counterparts, the need for authenticating digital images, validating their content and detecting forgeries will only increase. For digital photographs to be used as evidence in law issues or to be circulated in mass media, it is necessary to check the authenticity of the image. So In this paper, describes an Image forgery detection method based on SIFT. In particular, we focus on detection of a special type of digital forgery – the copy-move attack, in a copy-move image forgery method; a part of an image is copied and then pasted on a different location within the same image. In this approach an improved algorithm based on scale invariant features transform (SIFT) is used to detect such cloning forgery, In this technique Transform is applied to the input image to yield a reduced dimensional representation, After that Apply key point detection and feature descriptor along with a matching over all the key points. Such a method allows us to both understand if a copy–move attack has occurred and, also furthermore gives output by applying clustering over matched points.",
"title": ""
}
] |
scidocsrr
|